Sep 13 00:19:53.958131 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025 Sep 13 00:19:53.958155 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:19:53.958166 kernel: BIOS-provided physical RAM map: Sep 13 00:19:53.958173 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 13 00:19:53.958179 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 13 00:19:53.958185 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 13 00:19:53.958192 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Sep 13 00:19:53.958199 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Sep 13 00:19:53.958205 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 13 00:19:53.958214 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 13 00:19:53.958220 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 13 00:19:53.958227 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 13 00:19:53.958238 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 13 00:19:53.958245 kernel: NX (Execute Disable) protection: active Sep 13 00:19:53.958253 kernel: APIC: Static calls initialized Sep 13 00:19:53.958266 kernel: SMBIOS 2.8 present. 
Sep 13 00:19:53.958273 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 13 00:19:53.958279 kernel: Hypervisor detected: KVM
Sep 13 00:19:53.958286 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:19:53.958293 kernel: kvm-clock: using sched offset of 3074179614 cycles
Sep 13 00:19:53.958300 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:19:53.958307 kernel: tsc: Detected 2794.748 MHz processor
Sep 13 00:19:53.958314 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:19:53.958322 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:19:53.958329 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 13 00:19:53.958339 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 13 00:19:53.958346 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:19:53.958352 kernel: Using GB pages for direct mapping
Sep 13 00:19:53.958359 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:19:53.958366 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 13 00:19:53.958373 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:19:53.958380 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:19:53.958387 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:19:53.958397 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 13 00:19:53.958404 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:19:53.958410 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:19:53.958417 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:19:53.958424 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:19:53.958431 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 13 00:19:53.958438 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 13 00:19:53.958449 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 13 00:19:53.958458 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 13 00:19:53.958465 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 13 00:19:53.958473 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 13 00:19:53.958480 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 13 00:19:53.958497 kernel: No NUMA configuration found
Sep 13 00:19:53.958504 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 13 00:19:53.958512 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 13 00:19:53.958522 kernel: Zone ranges:
Sep 13 00:19:53.958529 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:19:53.958537 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 13 00:19:53.958544 kernel: Normal empty
Sep 13 00:19:53.958551 kernel: Movable zone start for each node
Sep 13 00:19:53.958559 kernel: Early memory node ranges
Sep 13 00:19:53.958566 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 00:19:53.958573 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 13 00:19:53.958580 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 13 00:19:53.958590 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:19:53.958599 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 00:19:53.958606 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 13 00:19:53.958614 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:19:53.958650 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:19:53.958657 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:19:53.958665 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:19:53.958672 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:19:53.958691 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:19:53.958702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:19:53.958710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:19:53.958717 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:19:53.958724 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:19:53.958731 kernel: TSC deadline timer available
Sep 13 00:19:53.958738 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 13 00:19:53.958746 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:19:53.958753 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 00:19:53.958763 kernel: kvm-guest: setup PV sched yield
Sep 13 00:19:53.958773 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 13 00:19:53.958780 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:19:53.958787 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:19:53.958795 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 13 00:19:53.958802 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 13 00:19:53.958809 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 13 00:19:53.958816 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 00:19:53.958823 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:19:53.958830 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:19:53.958841 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:19:53.958849 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:19:53.958856 kernel: random: crng init done
Sep 13 00:19:53.958864 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:19:53.958871 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:19:53.958878 kernel: Fallback order for Node 0: 0
Sep 13 00:19:53.958885 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 13 00:19:53.958893 kernel: Policy zone: DMA32
Sep 13 00:19:53.958902 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:19:53.958910 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 136900K reserved, 0K cma-reserved)
Sep 13 00:19:53.958917 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 00:19:53.958924 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:19:53.958932 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:19:53.958939 kernel: Dynamic Preempt: voluntary
Sep 13 00:19:53.958946 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:19:53.958954 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:19:53.958962 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 00:19:53.958972 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:19:53.958979 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:19:53.958986 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:19:53.958994 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:19:53.959003 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 00:19:53.959011 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 00:19:53.959018 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:19:53.959025 kernel: Console: colour VGA+ 80x25
Sep 13 00:19:53.959032 kernel: printk: console [ttyS0] enabled
Sep 13 00:19:53.959040 kernel: ACPI: Core revision 20230628
Sep 13 00:19:53.959050 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:19:53.959057 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:19:53.959064 kernel: x2apic enabled
Sep 13 00:19:53.959071 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:19:53.959079 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 13 00:19:53.959086 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 13 00:19:53.959094 kernel: kvm-guest: setup PV IPIs
Sep 13 00:19:53.959112 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:19:53.959120 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 13 00:19:53.959128 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 13 00:19:53.959135 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 00:19:53.959145 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 00:19:53.959153 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 00:19:53.959161 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:19:53.959168 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:19:53.959176 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:19:53.959186 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 00:19:53.959194 kernel: active return thunk: retbleed_return_thunk
Sep 13 00:19:53.959204 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 00:19:53.959212 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:19:53.959219 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:19:53.959227 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 13 00:19:53.959235 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 13 00:19:53.959243 kernel: active return thunk: srso_return_thunk
Sep 13 00:19:53.959253 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 13 00:19:53.959261 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:19:53.959268 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:19:53.959276 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:19:53.959284 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:19:53.959291 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 00:19:53.959299 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:19:53.959307 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:19:53.959314 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:19:53.959325 kernel: landlock: Up and running.
Sep 13 00:19:53.959332 kernel: SELinux: Initializing.
Sep 13 00:19:53.959340 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:19:53.959347 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:19:53.959355 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 00:19:53.959363 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:19:53.959371 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:19:53.959378 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:19:53.959388 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 00:19:53.959398 kernel: ... version: 0
Sep 13 00:19:53.959406 kernel: ... bit width: 48
Sep 13 00:19:53.959413 kernel: ... generic registers: 6
Sep 13 00:19:53.959421 kernel: ... value mask: 0000ffffffffffff
Sep 13 00:19:53.959429 kernel: ... max period: 00007fffffffffff
Sep 13 00:19:53.959436 kernel: ... fixed-purpose events: 0
Sep 13 00:19:53.959444 kernel: ... event mask: 000000000000003f
Sep 13 00:19:53.959451 kernel: signal: max sigframe size: 1776
Sep 13 00:19:53.959459 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:19:53.959470 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:19:53.959477 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:19:53.959492 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:19:53.959499 kernel: .... node #0, CPUs: #1 #2 #3
Sep 13 00:19:53.959507 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 00:19:53.959514 kernel: smpboot: Max logical packages: 1
Sep 13 00:19:53.959522 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 13 00:19:53.959529 kernel: devtmpfs: initialized
Sep 13 00:19:53.959537 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:19:53.959545 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:19:53.959556 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 00:19:53.959563 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:19:53.959571 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:19:53.959578 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:19:53.959586 kernel: audit: type=2000 audit(1757722792.512:1): state=initialized audit_enabled=0 res=1
Sep 13 00:19:53.959594 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:19:53.959601 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:19:53.959609 kernel: cpuidle: using governor menu
Sep 13 00:19:53.959629 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:19:53.959640 kernel: dca service started, version 1.12.1
Sep 13 00:19:53.959648 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 13 00:19:53.959656 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 13 00:19:53.959664 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:19:53.959671 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:19:53.959679 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:19:53.959689 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:19:53.959697 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:19:53.959707 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:19:53.959715 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:19:53.959723 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:19:53.959730 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:19:53.959738 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:19:53.959745 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:19:53.959753 kernel: ACPI: Interpreter enabled
Sep 13 00:19:53.959761 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 00:19:53.959768 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:19:53.959776 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:19:53.959786 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:19:53.959794 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 00:19:53.959802 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:19:53.960031 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:19:53.960169 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 00:19:53.960297 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 00:19:53.960307 kernel: PCI host bridge to bus 0000:00
Sep 13 00:19:53.960451 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:19:53.960582 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:19:53.960719 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:19:53.960840 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 13 00:19:53.960956 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 00:19:53.961071 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 13 00:19:53.961186 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:19:53.961350 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 13 00:19:53.961506 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 13 00:19:53.961654 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 13 00:19:53.961786 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 13 00:19:53.961922 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 13 00:19:53.962061 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:19:53.962225 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 13 00:19:53.962355 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 13 00:19:53.962483 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 13 00:19:53.962651 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 13 00:19:53.962809 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:19:53.962941 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 13 00:19:53.963070 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 13 00:19:53.963205 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 13 00:19:53.963354 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:19:53.963504 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 13 00:19:53.963653 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 13 00:19:53.963788 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 13 00:19:53.963915 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 13 00:19:53.964061 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 13 00:19:53.964195 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 00:19:53.964342 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 13 00:19:53.964472 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 13 00:19:53.964627 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 13 00:19:53.964773 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 13 00:19:53.964902 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 13 00:19:53.964917 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:19:53.964925 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:19:53.964933 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:19:53.964940 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:19:53.964948 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 00:19:53.964956 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 00:19:53.964963 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 00:19:53.964971 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 00:19:53.964982 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 00:19:53.964992 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 00:19:53.965000 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 00:19:53.965008 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 00:19:53.965015 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 00:19:53.965023 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 00:19:53.965031 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 00:19:53.965038 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 00:19:53.965046 kernel: iommu: Default domain type: Translated
Sep 13 00:19:53.965054 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:19:53.965066 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:19:53.965077 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:19:53.965087 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 00:19:53.965096 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 13 00:19:53.965248 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 00:19:53.965378 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 00:19:53.965516 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:19:53.965528 kernel: vgaarb: loaded
Sep 13 00:19:53.965540 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:19:53.965548 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:19:53.965556 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:19:53.965564 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:19:53.965572 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:19:53.965579 kernel: pnp: PnP ACPI init
Sep 13 00:19:53.965819 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 00:19:53.965851 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 00:19:53.965860 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:19:53.965882 kernel: NET: Registered PF_INET protocol family
Sep 13 00:19:53.965898 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:19:53.965908 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:19:53.965933 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:19:53.965950 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:19:53.965959 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 00:19:53.965967 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:19:53.965991 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:19:53.966004 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:19:53.966012 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:19:53.966019 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:19:53.966148 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:19:53.966282 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:19:53.966422 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:19:53.966553 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 13 00:19:53.966687 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 00:19:53.966804 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 13 00:19:53.966820 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:19:53.966827 kernel: Initialise system trusted keyrings
Sep 13 00:19:53.966836 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:19:53.966844 kernel: Key type asymmetric registered
Sep 13 00:19:53.966852 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:19:53.966859 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:19:53.966867 kernel: io scheduler mq-deadline registered
Sep 13 00:19:53.966875 kernel: io scheduler kyber registered
Sep 13 00:19:53.966882 kernel: io scheduler bfq registered
Sep 13 00:19:53.966893 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:19:53.966901 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 00:19:53.966909 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 00:19:53.966917 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 00:19:53.966925 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:19:53.966933 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:19:53.966941 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:19:53.966949 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:19:53.966957 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:19:53.966967 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:19:53.967114 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 13 00:19:53.967236 kernel: rtc_cmos 00:04: registered as rtc0
Sep 13 00:19:53.967356 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:19:53 UTC (1757722793)
Sep 13 00:19:53.967499 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 13 00:19:53.967520 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 13 00:19:53.967536 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:19:53.967554 kernel: Segment Routing with IPv6
Sep 13 00:19:53.967581 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:19:53.967595 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:19:53.967610 kernel: Key type dns_resolver registered
Sep 13 00:19:53.967644 kernel: IPI shorthand broadcast: enabled
Sep 13 00:19:53.967662 kernel: sched_clock: Marking stable (769002762, 113873071)->(943053823, -60177990)
Sep 13 00:19:53.967674 kernel: registered taskstats version 1
Sep 13 00:19:53.967681 kernel: Loading compiled-in X.509 certificates
Sep 13 00:19:53.967689 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:19:53.967697 kernel: Key type .fscrypt registered
Sep 13 00:19:53.967708 kernel: Key type fscrypt-provisioning registered
Sep 13 00:19:53.967716 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:19:53.967723 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:19:53.967731 kernel: ima: No architecture policies found
Sep 13 00:19:53.967739 kernel: clk: Disabling unused clocks
Sep 13 00:19:53.967747 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:19:53.967755 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:19:53.967762 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:19:53.967772 kernel: Run /init as init process
Sep 13 00:19:53.967780 kernel: with arguments:
Sep 13 00:19:53.967788 kernel: /init
Sep 13 00:19:53.967796 kernel: with environment:
Sep 13 00:19:53.967803 kernel: HOME=/
Sep 13 00:19:53.967811 kernel: TERM=linux
Sep 13 00:19:53.967818 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:19:53.967829 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:19:53.967839 systemd[1]: Detected virtualization kvm.
Sep 13 00:19:53.967850 systemd[1]: Detected architecture x86-64.
Sep 13 00:19:53.967858 systemd[1]: Running in initrd.
Sep 13 00:19:53.967866 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:19:53.967874 systemd[1]: Hostname set to <linux>.
Sep 13 00:19:53.967883 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:19:53.967891 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:19:53.967899 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:19:53.967908 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:19:53.967920 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:19:53.967953 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:19:53.967965 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:19:53.967983 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 00:19:53.967994 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 00:19:53.968007 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 00:19:53.968015 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:19:53.968024 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:19:53.968032 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:19:53.968041 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:19:53.968049 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:19:53.968057 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:19:53.968066 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:19:53.968077 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:19:53.968086 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:19:53.968094 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 00:19:53.968103 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:19:53.968111 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:19:53.968120 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:19:53.968128 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:19:53.968137 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 00:19:53.968148 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:19:53.968157 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 00:19:53.968165 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:19:53.968174 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:19:53.968182 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:19:53.968190 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:19:53.968199 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 00:19:53.968207 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:19:53.968216 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:19:53.968227 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:19:53.968264 systemd-journald[193]: Collecting audit messages is disabled. Sep 13 00:19:53.968288 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:19:53.968296 systemd-journald[193]: Journal started Sep 13 00:19:53.968318 systemd-journald[193]: Runtime Journal (/run/log/journal/2931c64962354a009c8051472624cdcb) is 6.0M, max 48.4M, 42.3M free. Sep 13 00:19:53.968025 systemd-modules-load[194]: Inserted module 'overlay' Sep 13 00:19:54.000243 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Sep 13 00:19:54.003650 kernel: Bridge firewalling registered Sep 13 00:19:54.003676 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:19:54.003887 systemd-modules-load[194]: Inserted module 'br_netfilter' Sep 13 00:19:54.005408 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:19:54.009645 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:19:54.023907 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:19:54.024916 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:19:54.025955 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:19:54.030805 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:19:54.043473 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:19:54.045982 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:19:54.047730 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:19:54.056898 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:19:54.059260 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:19:54.062788 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 00:19:54.088119 systemd-resolved[227]: Positive Trust Anchors: Sep 13 00:19:54.088133 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:19:54.088164 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:19:54.090791 systemd-resolved[227]: Defaulting to hostname 'linux'. Sep 13 00:19:54.092000 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:19:54.097871 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:19:54.103847 dracut-cmdline[231]: dracut-dracut-053 Sep 13 00:19:54.108497 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:19:54.203665 kernel: SCSI subsystem initialized Sep 13 00:19:54.213647 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:19:54.224664 kernel: iscsi: registered transport (tcp) Sep 13 00:19:54.247651 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:19:54.247691 kernel: QLogic iSCSI HBA Driver Sep 13 00:19:54.308174 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Sep 13 00:19:54.323759 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 00:19:54.352429 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:19:54.352499 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:19:54.352518 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 00:19:54.397653 kernel: raid6: avx2x4 gen() 30208 MB/s Sep 13 00:19:54.414646 kernel: raid6: avx2x2 gen() 31010 MB/s Sep 13 00:19:54.431678 kernel: raid6: avx2x1 gen() 25737 MB/s Sep 13 00:19:54.431706 kernel: raid6: using algorithm avx2x2 gen() 31010 MB/s Sep 13 00:19:54.449692 kernel: raid6: .... xor() 19812 MB/s, rmw enabled Sep 13 00:19:54.449728 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:19:54.470646 kernel: xor: automatically using best checksumming function avx Sep 13 00:19:54.623653 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:19:54.638940 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:19:54.653848 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:19:54.668106 systemd-udevd[414]: Using default interface naming scheme 'v255'. Sep 13 00:19:54.672785 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:19:54.679763 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:19:54.697150 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Sep 13 00:19:54.732716 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:19:54.753816 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:19:54.823703 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:19:54.832847 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 00:19:54.845384 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 00:19:54.849836 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:19:54.852739 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:19:54.855131 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:19:54.866285 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 13 00:19:54.866562 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:19:54.866795 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 00:19:54.872736 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 13 00:19:54.877849 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:19:54.877880 kernel: GPT:9289727 != 19775487 Sep 13 00:19:54.877891 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:19:54.877901 kernel: GPT:9289727 != 19775487 Sep 13 00:19:54.878817 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:19:54.878836 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:19:54.882944 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:19:54.894410 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:19:54.960020 kernel: AVX2 version of gcm_enc/dec engaged. 
Sep 13 00:19:54.960047 kernel: AES CTR mode by8 optimization enabled Sep 13 00:19:54.894495 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:19:54.952141 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:19:54.952403 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:19:54.952474 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:19:54.953095 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:19:54.982639 kernel: libata version 3.00 loaded. Sep 13 00:19:54.982679 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (472) Sep 13 00:19:54.973062 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:19:54.985650 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 00:19:54.985886 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 00:19:54.987655 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 13 00:19:54.987867 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 00:19:54.997902 kernel: scsi host0: ahci Sep 13 00:19:54.999050 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (459) Sep 13 00:19:54.999064 kernel: scsi host1: ahci Sep 13 00:19:54.999212 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 13 00:19:55.011281 kernel: scsi host2: ahci Sep 13 00:19:55.011586 kernel: scsi host3: ahci Sep 13 00:19:55.013632 kernel: scsi host4: ahci Sep 13 00:19:55.015383 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 00:19:55.020127 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 13 00:19:55.023635 kernel: scsi host5: ahci Sep 13 00:19:55.024020 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 13 00:19:55.030182 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Sep 13 00:19:55.030209 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Sep 13 00:19:55.030236 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Sep 13 00:19:55.030251 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Sep 13 00:19:55.030261 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Sep 13 00:19:55.030272 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Sep 13 00:19:55.027331 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 13 00:19:55.169410 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 00:19:55.180682 disk-uuid[559]: Primary Header is updated. Sep 13 00:19:55.180682 disk-uuid[559]: Secondary Entries is updated. Sep 13 00:19:55.180682 disk-uuid[559]: Secondary Header is updated. Sep 13 00:19:55.218268 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:19:55.218353 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:19:55.219338 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:19:55.247094 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 13 00:19:55.272466 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:19:55.342673 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 00:19:55.343673 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 00:19:55.343767 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 13 00:19:55.345116 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 13 00:19:55.345130 kernel: ata3.00: applying bridge limits Sep 13 00:19:55.346265 kernel: ata3.00: configured for UDMA/100 Sep 13 00:19:55.346350 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 00:19:55.347661 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 00:19:55.399662 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 13 00:19:55.399756 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 00:19:55.445660 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 13 00:19:55.445935 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:19:55.459642 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 13 00:19:56.190331 disk-uuid[566]: The operation has completed successfully. Sep 13 00:19:56.191744 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:19:56.229122 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:19:56.229262 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 00:19:56.254900 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 00:19:56.258022 sh[592]: Success Sep 13 00:19:56.270637 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 13 00:19:56.305117 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 00:19:56.324512 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 00:19:56.328413 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 13 00:19:56.342146 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa Sep 13 00:19:56.342176 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:19:56.342187 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 13 00:19:56.343142 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 00:19:56.343880 kernel: BTRFS info (device dm-0): using free space tree Sep 13 00:19:56.349228 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 00:19:56.350003 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 00:19:56.363784 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 00:19:56.365826 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 00:19:56.374786 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:19:56.374817 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:19:56.374829 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:19:56.377660 kernel: BTRFS info (device vda6): auto enabling async discard Sep 13 00:19:56.389150 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Sep 13 00:19:56.390881 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:19:56.400016 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 00:19:56.407809 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 00:19:56.468308 ignition[681]: Ignition 2.19.0 Sep 13 00:19:56.468323 ignition[681]: Stage: fetch-offline Sep 13 00:19:56.468364 ignition[681]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:19:56.468375 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:19:56.468474 ignition[681]: parsed url from cmdline: "" Sep 13 00:19:56.468478 ignition[681]: no config URL provided Sep 13 00:19:56.468484 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:19:56.468495 ignition[681]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:19:56.468527 ignition[681]: op(1): [started] loading QEMU firmware config module Sep 13 00:19:56.468534 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 00:19:56.478123 ignition[681]: op(1): [finished] loading QEMU firmware config module Sep 13 00:19:56.505240 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:19:56.521258 ignition[681]: parsing config with SHA512: ff9891935ca836dfbacfe996f35967e554210174c18395fd5505dbbb3a05a0e000728209137af0a3a0a80a99b263aae92f8d0efe88502c2296c8528b65ded52c Sep 13 00:19:56.521901 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:19:56.525228 unknown[681]: fetched base config from "system" Sep 13 00:19:56.525241 unknown[681]: fetched user config from "qemu" Sep 13 00:19:56.525598 ignition[681]: fetch-offline: fetch-offline passed Sep 13 00:19:56.525708 ignition[681]: Ignition finished successfully Sep 13 00:19:56.528340 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:19:56.545025 systemd-networkd[781]: lo: Link UP Sep 13 00:19:56.545033 systemd-networkd[781]: lo: Gained carrier Sep 13 00:19:56.546822 systemd-networkd[781]: Enumeration completed Sep 13 00:19:56.546921 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:19:56.547232 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:19:56.547236 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:19:56.548065 systemd-networkd[781]: eth0: Link UP Sep 13 00:19:56.548069 systemd-networkd[781]: eth0: Gained carrier Sep 13 00:19:56.548076 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:19:56.549087 systemd[1]: Reached target network.target - Network. Sep 13 00:19:56.550849 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 00:19:56.556752 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 13 00:19:56.571748 ignition[784]: Ignition 2.19.0 Sep 13 00:19:56.572556 ignition[784]: Stage: kargs Sep 13 00:19:56.572770 ignition[784]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:19:56.572783 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:19:56.575401 ignition[784]: kargs: kargs passed Sep 13 00:19:56.575491 ignition[784]: Ignition finished successfully Sep 13 00:19:56.578182 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 00:19:56.581697 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:19:56.585806 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 13 00:19:56.603729 ignition[792]: Ignition 2.19.0 Sep 13 00:19:56.603742 ignition[792]: Stage: disks Sep 13 00:19:56.603936 ignition[792]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:19:56.603950 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:19:56.604897 ignition[792]: disks: disks passed Sep 13 00:19:56.604948 ignition[792]: Ignition finished successfully Sep 13 00:19:56.610429 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 00:19:56.612125 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 00:19:56.614096 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:19:56.615417 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:19:56.617318 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:19:56.619486 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:19:56.633784 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 13 00:19:56.649862 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 13 00:19:56.657966 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 00:19:56.673798 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 00:19:56.761640 kernel: EXT4-fs (vda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none. Sep 13 00:19:56.762101 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 00:19:56.764311 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 00:19:56.777704 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:19:56.780374 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 00:19:56.782708 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 13 00:19:56.782761 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:19:56.792075 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812) Sep 13 00:19:56.792099 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:19:56.792111 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:19:56.792122 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:19:56.782788 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:19:56.794399 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Sep 13 00:19:56.796528 kernel: BTRFS info (device vda6): auto enabling async discard Sep 13 00:19:56.797605 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 00:19:56.813843 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 13 00:19:56.846842 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:19:56.852221 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:19:56.857097 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:19:56.862457 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:19:56.952338 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 00:19:56.964753 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 00:19:56.968054 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 00:19:56.972679 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:19:56.993998 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 13 00:19:57.076817 ignition[931]: INFO : Ignition 2.19.0 Sep 13 00:19:57.076817 ignition[931]: INFO : Stage: mount Sep 13 00:19:57.078570 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:19:57.078570 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:19:57.081214 ignition[931]: INFO : mount: mount passed Sep 13 00:19:57.081999 ignition[931]: INFO : Ignition finished successfully Sep 13 00:19:57.085091 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 00:19:57.092741 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 00:19:57.342455 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 13 00:19:57.355825 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:19:57.364988 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Sep 13 00:19:57.365017 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:19:57.365861 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:19:57.365883 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:19:57.369638 kernel: BTRFS info (device vda6): auto enabling async discard Sep 13 00:19:57.371248 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 13 00:19:57.394638 ignition[957]: INFO : Ignition 2.19.0 Sep 13 00:19:57.394638 ignition[957]: INFO : Stage: files Sep 13 00:19:57.394638 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:19:57.394638 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:19:57.398824 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:19:57.398824 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:19:57.398824 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:19:57.402876 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:19:57.402876 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:19:57.402876 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:19:57.402876 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 13 00:19:57.402876 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 13 00:19:57.400072 unknown[957]: wrote ssh authorized keys file for user: core Sep 13 00:19:57.586077 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:19:57.702845 systemd-networkd[781]: eth0: Gained IPv6LL Sep 13 00:19:57.887221 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 13 00:19:57.887221 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:19:57.891826 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:19:57.891826 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:19:57.891826 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:19:57.891826 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:19:57.891826 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:19:57.891826 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:19:57.891826 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:19:57.891826 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:19:57.891826 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:19:57.891826 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 00:19:57.891826 ignition[957]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 00:19:57.891826 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 00:19:57.891826 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 13 00:19:58.180160 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 00:19:58.562767 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 00:19:58.562767 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 00:19:58.566398 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:19:58.568488 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:19:58.568488 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 00:19:58.568488 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 13 00:19:58.572651 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:19:58.574530 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:19:58.574530 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 13 00:19:58.577526 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 13 00:19:58.599634 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:19:58.605591 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:19:58.607245 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 13 00:19:58.607245 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:19:58.609953 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:19:58.611399 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:19:58.613121 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:19:58.614750 ignition[957]: INFO : files: files passed Sep 13 00:19:58.615497 ignition[957]: INFO : Ignition finished successfully Sep 13 00:19:58.618845 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:19:58.642755 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:19:58.644531 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
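The op(3)..op(11) sequence above (fetch files over HTTPS, write local files, create one symlink, preset two units) maps onto a small declarative config. Below is a hedged reconstruction of what such a config could look like as an Ignition v3-style document assembled in Python; the spec version and exact field names are assumptions from the Ignition v3 schema as I recall it, not read from this machine's real config.

# Hedged reconstruction, not this host's actual Ignition config.
import json

config = {
    "ignition": {"version": "3.4.0"},      # assumed spec version
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
             "contents": {"source":
                 "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
            {"path": "/etc/flatcar/update.conf"},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target":
                 "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True},
            {"name": "coreos-metadata.service", "enabled": False},
        ],
    },
}
print(json.dumps(config, indent=2))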
Sep 13 00:19:58.651897 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:19:58.652059 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 00:19:58.658727 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Sep 13 00:19:58.662037 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:19:58.662037 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:19:58.665533 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:19:58.664958 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:19:58.667291 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:19:58.673881 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:19:58.701163 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:19:58.701310 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 00:19:58.702565 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 00:19:58.705923 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:19:58.707935 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:19:58.708817 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:19:58.728131 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:19:58.749781 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:19:58.760108 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:19:58.761413 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:19:58.763738 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:19:58.765794 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:19:58.765911 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:19:58.768236 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:19:58.769773 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:19:58.771811 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:19:58.773896 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:19:58.776044 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:19:58.778176 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:19:58.780610 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:19:58.783430 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:19:58.785768 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:19:58.788509 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:19:58.790467 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:19:58.790643 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:19:58.793342 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Sep 13 00:19:58.794871 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:19:58.796914 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:19:58.797061 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:19:58.799071 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:19:58.799193 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:19:58.801503 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:19:58.801643 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:19:58.803479 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:19:58.805154 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:19:58.809787 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:19:58.811602 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:19:58.813575 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:19:58.815443 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:19:58.815610 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:19:58.817573 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:19:58.817689 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:19:58.820051 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:19:58.820181 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:19:58.822211 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:19:58.822371 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 00:19:58.834843 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 00:19:58.836835 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:19:58.837773 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:19:58.837966 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:19:58.840073 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:19:58.840375 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:19:58.847703 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:19:58.848007 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:19:58.852641 ignition[1011]: INFO : Ignition 2.19.0 Sep 13 00:19:58.852641 ignition[1011]: INFO : Stage: umount Sep 13 00:19:58.852641 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:19:58.852641 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:19:58.852641 ignition[1011]: INFO : umount: umount passed Sep 13 00:19:58.852641 ignition[1011]: INFO : Ignition finished successfully Sep 13 00:19:58.855203 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:19:58.855377 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:19:58.858301 systemd[1]: Stopped target network.target - Network. Sep 13 00:19:58.859742 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:19:58.859821 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Sep 13 00:19:58.862561 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:19:58.862651 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:19:58.863647 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:19:58.863712 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:19:58.864110 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:19:58.864174 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 00:19:58.864646 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 00:19:58.869599 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:19:58.871717 systemd-networkd[781]: eth0: DHCPv6 lease lost Sep 13 00:19:58.874012 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:19:58.874201 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:19:58.875401 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:19:58.875461 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:19:58.883789 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:19:58.886662 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:19:58.886737 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:19:58.889151 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:19:58.892573 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:19:58.892720 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:19:58.902568 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:19:58.902707 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:19:58.903896 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:19:58.903948 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:19:58.906236 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:19:58.906298 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:19:58.909892 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:19:58.911740 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:19:58.911963 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:19:58.915237 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:19:58.915369 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:19:58.917331 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:19:58.917443 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:19:58.918654 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:19:58.918706 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:19:58.919024 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:19:58.919093 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:19:58.919721 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:19:58.919778 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Sep 13 00:19:58.920451 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:19:58.920503 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:19:58.932805 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:19:58.934671 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:19:58.934733 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:19:58.937204 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 13 00:19:58.937256 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:19:58.939505 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:19:58.939556 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:19:58.940819 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:19:58.940870 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:19:58.943376 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:19:58.943488 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:19:59.116517 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:19:59.116706 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:19:59.119231 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:19:59.120403 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:19:59.120492 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:19:59.140007 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:19:59.147486 systemd[1]: Switching root. Sep 13 00:19:59.176580 systemd-journald[193]: Journal stopped Sep 13 00:20:00.534455 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Sep 13 00:20:00.534635 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:20:00.534671 kernel: SELinux: policy capability open_perms=1 Sep 13 00:20:00.534684 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:20:00.534698 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:20:00.534709 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:20:00.534739 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:20:00.534757 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:20:00.534773 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:20:00.534784 kernel: audit: type=1403 audit(1757722799.617:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:20:00.534808 systemd[1]: Successfully loaded SELinux policy in 41.248ms. Sep 13 00:20:00.534842 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.832ms. Sep 13 00:20:00.534863 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:20:00.534877 systemd[1]: Detected virtualization kvm. Sep 13 00:20:00.534889 systemd[1]: Detected architecture x86-64. 
Sep 13 00:20:00.534901 systemd[1]: Detected first boot. Sep 13 00:20:00.534913 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:20:00.534936 zram_generator::config[1060]: No configuration found. Sep 13 00:20:00.534952 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:20:00.534975 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:20:00.534987 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 13 00:20:00.534999 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:20:00.535030 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 00:20:00.535051 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 13 00:20:00.535067 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 00:20:00.535082 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 00:20:00.535102 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 00:20:00.535155 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 00:20:00.535191 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 00:20:00.535204 systemd[1]: Created slice user.slice - User and Session Slice. Sep 13 00:20:00.535219 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:20:00.535232 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:20:00.535244 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 00:20:00.535257 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 00:20:00.535269 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 00:20:00.535286 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:20:00.535309 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 13 00:20:00.535327 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:20:00.535341 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 13 00:20:00.535355 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 13 00:20:00.535370 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 13 00:20:00.535382 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 00:20:00.535395 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:20:00.535407 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:20:00.535427 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:20:00.535439 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:20:00.535452 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 00:20:00.535465 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 00:20:00.535480 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:20:00.535495 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 13 00:20:00.535507 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:20:00.535519 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 00:20:00.535531 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 00:20:00.535544 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 00:20:00.535562 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 00:20:00.535574 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:20:00.535589 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 00:20:00.535603 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 00:20:00.535795 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 00:20:00.535810 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:20:00.535827 systemd[1]: Reached target machines.target - Containers. Sep 13 00:20:00.535853 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 00:20:00.535875 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:20:00.535887 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:20:00.535903 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 00:20:00.535920 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:20:00.535932 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:20:00.535947 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:20:00.535959 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 00:20:00.535979 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:20:00.535992 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:20:00.536011 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:20:00.536024 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 13 00:20:00.536036 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:20:00.536048 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:20:00.536060 kernel: fuse: init (API version 7.39) Sep 13 00:20:00.536071 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:20:00.536084 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:20:00.536095 kernel: loop: module loaded Sep 13 00:20:00.536107 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:20:00.536148 systemd-journald[1127]: Collecting audit messages is disabled. Sep 13 00:20:00.536187 kernel: ACPI: bus type drm_connector registered Sep 13 00:20:00.536200 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
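The modprobe@*.service units being started above are a single template unit instantiated once per kernel module: the text between "@" and ".service" is the instance parameter the unit hands to modprobe as %i. Extracting those module names:

# Pull the %i instance parameter out of each template unit name.
units = ["modprobe@configfs.service", "modprobe@dm_mod.service",
         "modprobe@drm.service", "modprobe@efi_pstore.service",
         "modprobe@fuse.service", "modprobe@loop.service"]
modules = [u.split("@", 1)[1].removesuffix(".service") for u in units]
print(modules)  # ['configfs', 'dm_mod', 'drm', 'efi_pstore', 'fuse', 'loop']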
Sep 13 00:20:00.536218 systemd-journald[1127]: Journal started Sep 13 00:20:00.536240 systemd-journald[1127]: Runtime Journal (/run/log/journal/2931c64962354a009c8051472624cdcb) is 6.0M, max 48.4M, 42.3M free. Sep 13 00:20:00.200268 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:20:00.221866 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 13 00:20:00.222524 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 00:20:00.539673 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:20:00.542095 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:20:00.542127 systemd[1]: Stopped verity-setup.service. Sep 13 00:20:00.544101 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:20:00.549651 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:20:00.551452 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 00:20:00.552720 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 00:20:00.554043 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 00:20:00.555217 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 00:20:00.556972 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 00:20:00.559234 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 00:20:00.561388 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 00:20:00.562935 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:20:00.564541 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:20:00.564854 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 00:20:00.566374 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:20:00.566561 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:20:00.568057 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:20:00.568250 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:20:00.569678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:20:00.569902 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:20:00.571456 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:20:00.571657 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 00:20:00.573101 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:20:00.573311 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:20:00.574916 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:20:00.577141 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:20:00.578912 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 00:20:00.595479 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:20:00.602738 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 00:20:00.605052 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
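The runtime-journal size line above is internally consistent: the reported free space is roughly the cap minus what is already used. Checking the three figures from the log:

# 6.0M used, 48.4M cap, 42.3M free: free ~= cap - used.
used_mib, max_mib, free_mib = 6.0, 48.4, 42.3   # values from the log line
assert abs((max_mib - used_mib) - free_mib) < 0.2
print("journal sizing checks out")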
Sep 13 00:20:00.606189 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:20:00.606219 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:20:00.608346 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 13 00:20:00.612517 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 00:20:00.615191 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 13 00:20:00.616740 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:20:00.618599 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 00:20:00.622531 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 00:20:00.623712 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:20:00.628014 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 00:20:00.629782 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:20:00.634037 systemd-journald[1127]: Time spent on flushing to /var/log/journal/2931c64962354a009c8051472624cdcb is 17.683ms for 949 entries. Sep 13 00:20:00.634037 systemd-journald[1127]: System Journal (/var/log/journal/2931c64962354a009c8051472624cdcb) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:20:00.671583 systemd-journald[1127]: Received client request to flush runtime journal. Sep 13 00:20:00.671644 kernel: loop0: detected capacity change from 0 to 140768 Sep 13 00:20:00.634728 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:20:00.638565 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 00:20:00.648122 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:20:00.651717 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 00:20:00.654093 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 00:20:00.656754 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 00:20:00.667462 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:20:00.670115 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 00:20:00.676133 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 00:20:00.682048 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 00:20:00.692365 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 13 00:20:00.702641 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:20:00.697747 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 00:20:00.699668 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:20:00.720543 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Sep 13 00:20:00.721114 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. 
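From the flush line above, a quick per-entry rate estimate (values copied from the log):

# ~17.683 ms spent flushing 949 entries to /var/log/journal.
ms, entries = 17.683, 949
print(f"{ms / entries * 1000:.1f} us/entry")    # ~18.6 us per entry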
Sep 13 00:20:00.722778 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:20:00.724093 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 00:20:00.729383 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:20:00.732060 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:20:00.733668 kernel: loop1: detected capacity change from 0 to 229808 Sep 13 00:20:00.740817 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 00:20:00.779742 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 00:20:00.782753 kernel: loop2: detected capacity change from 0 to 142488 Sep 13 00:20:00.797789 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:20:00.818517 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Sep 13 00:20:00.818542 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Sep 13 00:20:00.824330 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:20:00.829665 kernel: loop3: detected capacity change from 0 to 140768 Sep 13 00:20:00.843651 kernel: loop4: detected capacity change from 0 to 229808 Sep 13 00:20:00.852665 kernel: loop5: detected capacity change from 0 to 142488 Sep 13 00:20:00.863153 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 13 00:20:00.863817 (sd-merge)[1198]: Merged extensions into '/usr'. Sep 13 00:20:00.867843 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 00:20:00.867871 systemd[1]: Reloading... Sep 13 00:20:00.940658 zram_generator::config[1225]: No configuration found. Sep 13 00:20:01.010108 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:20:01.071587 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:20:01.123426 systemd[1]: Reloading finished in 255 ms. Sep 13 00:20:01.167261 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 00:20:01.168846 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 00:20:01.191885 systemd[1]: Starting ensure-sysext.service... Sep 13 00:20:01.194292 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:20:01.221595 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:20:01.222044 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 00:20:01.223096 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:20:01.223422 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Sep 13 00:20:01.223510 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Sep 13 00:20:01.227720 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. 
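The (sd-merge) lines above are systemd-sysext combining the three extension images with the base /usr. Conceptually this is a read-only overlay mount with the extension trees stacked over /usr; the sketch below prints an illustrative mount invocation, and the /run/extensions paths are assumed for illustration, not the hierarchy systemd-sysext actually uses on this host.

# Conceptual view of the sysext merge as an overlayfs lowerdir stack.
exts = ["containerd-flatcar", "docker-flatcar", "kubernetes"]  # from the log
layers = [f"/run/extensions/{e}/usr" for e in reversed(exts)] + ["/usr"]
print("mount -t overlay overlay -o lowerdir=" + ":".join(layers) + " /usr")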
Sep 13 00:20:01.227733 systemd-tmpfiles[1263]: Skipping /boot Sep 13 00:20:01.229819 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)... Sep 13 00:20:01.229836 systemd[1]: Reloading... Sep 13 00:20:01.239262 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:20:01.239287 systemd-tmpfiles[1263]: Skipping /boot Sep 13 00:20:01.288740 zram_generator::config[1290]: No configuration found. Sep 13 00:20:01.399910 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:20:01.454250 systemd[1]: Reloading finished in 224 ms. Sep 13 00:20:01.480034 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 00:20:01.497453 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:20:01.509017 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:20:01.511789 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 00:20:01.514346 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:20:01.519099 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:20:01.525841 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:20:01.529024 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 00:20:01.534003 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:20:01.534193 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:20:01.536860 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:20:01.542873 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:20:01.547911 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:20:01.549442 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:20:01.553058 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:20:01.554150 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:20:01.555326 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:20:01.555547 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:20:01.557799 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:20:01.557983 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:20:01.559681 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:20:01.559923 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:20:01.566667 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 00:20:01.573632 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 13 00:20:01.573820 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:20:01.576519 systemd-udevd[1334]: Using default interface naming scheme 'v255'. Sep 13 00:20:01.579194 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:20:01.584859 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:20:01.588728 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:20:01.589836 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:20:01.592855 augenrules[1360]: No rules Sep 13 00:20:01.593997 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 00:20:01.595191 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:20:01.596378 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:20:01.598394 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 00:20:01.600241 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:20:01.600459 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:20:01.602299 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:20:01.603047 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:20:01.605204 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:20:01.605414 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:20:01.608678 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 00:20:01.621358 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:20:01.623552 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:20:01.630522 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:20:01.630813 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:20:01.639785 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:20:01.642879 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:20:01.647531 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:20:01.650800 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:20:01.652345 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:20:01.654877 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:20:01.655921 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:20:01.657689 systemd[1]: Finished ensure-sysext.service. Sep 13 00:20:01.658123 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:20:01.665041 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 13 00:20:01.665256 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:20:01.666863 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:20:01.667051 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:20:01.674198 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:20:01.675446 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:20:01.677136 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:20:01.677375 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:20:01.703291 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1398) Sep 13 00:20:01.697980 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 13 00:20:01.701101 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:20:01.701161 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:20:01.709456 systemd-resolved[1333]: Positive Trust Anchors: Sep 13 00:20:01.709667 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:20:01.709699 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:20:01.711783 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 13 00:20:01.713256 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:20:01.714035 systemd-resolved[1333]: Defaulting to hostname 'linux'. Sep 13 00:20:01.715845 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:20:01.717351 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:20:01.755683 systemd-networkd[1399]: lo: Link UP Sep 13 00:20:01.755696 systemd-networkd[1399]: lo: Gained carrier Sep 13 00:20:01.757434 systemd-networkd[1399]: Enumeration completed Sep 13 00:20:01.757978 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:20:01.757982 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:20:01.758458 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:20:01.760121 systemd-networkd[1399]: eth0: Link UP Sep 13 00:20:01.760133 systemd-networkd[1399]: eth0: Gained carrier Sep 13 00:20:01.760171 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
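The "Positive Trust Anchors" entry above is the DNSSEC root DS record that systemd-resolved ships as its built-in anchor. Splitting it into its RDATA fields (key tag 20326, algorithm 8 = RSA/SHA-256, digest type 2 = SHA-256):

# Parse the root DS record exactly as it appears in the log.
ds = (". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc6834"
      "57104237c7f8ec8d")
name, _cls, _type, key_tag, alg, digest_type, digest = ds.split()
assert (key_tag, alg, digest_type) == ("20326", "8", "2")
print(f"root KSK key tag {key_tag}, digest {digest[:16]}...")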
Sep 13 00:20:01.760606 systemd[1]: Reached target network.target - Network. Sep 13 00:20:01.766965 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:20:01.768946 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 00:20:01.773683 systemd-networkd[1399]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:20:01.780685 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 00:20:01.788749 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 00:20:01.789660 systemd-timesyncd[1409]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 13 00:20:01.789968 systemd-timesyncd[1409]: Initial clock synchronization to Sat 2025-09-13 00:20:01.583016 UTC. Sep 13 00:20:01.792177 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 13 00:20:01.801833 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 00:20:01.808926 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:20:01.814648 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 13 00:20:01.832211 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 13 00:20:01.837885 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 13 00:20:01.838044 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 13 00:20:01.843646 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:20:01.848642 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 13 00:20:01.920805 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:20:01.921124 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:20:01.936708 kernel: kvm_amd: TSC scaling supported Sep 13 00:20:01.936752 kernel: kvm_amd: Nested Virtualization enabled Sep 13 00:20:01.936783 kernel: kvm_amd: Nested Paging enabled Sep 13 00:20:01.937652 kernel: kvm_amd: LBR virtualization supported Sep 13 00:20:01.937669 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 13 00:20:01.938689 kernel: kvm_amd: Virtual GIF supported Sep 13 00:20:01.960645 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:20:01.996106 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 00:20:02.018738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:20:02.028893 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 13 00:20:02.037555 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:20:02.078661 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 00:20:02.080525 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:20:02.081873 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:20:02.083300 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 00:20:02.084836 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 00:20:02.086839 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
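One reading of the timesyncd lines above, offered tentatively: the entry announcing "Initial clock synchronization to ... 00:20:01.583016" itself carries a journal stamp of 00:20:01.789968, which suggests the system clock was stepped back by roughly 0.2 s. The arithmetic, using only the two timestamps from the log:

# Both values are seconds past 00:20:00 on the same day.
before = 1.789968   # journal stamp on the sync message
after  = 1.583016   # NTP-corrected time it synchronized to
print(f"step: {(after - before) * 1000:.0f} ms")   # ~ -207 ms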
Sep 13 00:20:02.088274 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 00:20:02.089847 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 00:20:02.091338 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:20:02.091382 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:20:02.092489 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:20:02.094818 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 00:20:02.098227 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 00:20:02.110187 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 00:20:02.113154 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 00:20:02.114900 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 00:20:02.116120 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:20:02.117170 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:20:02.118219 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:20:02.118257 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:20:02.119551 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 00:20:02.121890 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 00:20:02.124721 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 00:20:02.126879 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:20:02.128964 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:20:02.130053 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 00:20:02.132827 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 00:20:02.137867 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:20:02.143657 jq[1440]: false Sep 13 00:20:02.144786 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:20:02.147160 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:20:02.152121 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:20:02.153613 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:20:02.154101 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:20:02.156831 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:20:02.159583 dbus-daemon[1439]: [system] SELinux support is enabled Sep 13 00:20:02.161302 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:20:02.165508 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 13 00:20:02.167550 extend-filesystems[1441]: Found loop3 Sep 13 00:20:02.168459 extend-filesystems[1441]: Found loop4 Sep 13 00:20:02.168459 extend-filesystems[1441]: Found loop5 Sep 13 00:20:02.168459 extend-filesystems[1441]: Found sr0 Sep 13 00:20:02.168459 extend-filesystems[1441]: Found vda Sep 13 00:20:02.168459 extend-filesystems[1441]: Found vda1 Sep 13 00:20:02.168459 extend-filesystems[1441]: Found vda2 Sep 13 00:20:02.168459 extend-filesystems[1441]: Found vda3 Sep 13 00:20:02.168459 extend-filesystems[1441]: Found usr Sep 13 00:20:02.168459 extend-filesystems[1441]: Found vda4 Sep 13 00:20:02.168459 extend-filesystems[1441]: Found vda6 Sep 13 00:20:02.168459 extend-filesystems[1441]: Found vda7 Sep 13 00:20:02.168459 extend-filesystems[1441]: Found vda9 Sep 13 00:20:02.168459 extend-filesystems[1441]: Checking size of /dev/vda9 Sep 13 00:20:02.192110 extend-filesystems[1441]: Resized partition /dev/vda9 Sep 13 00:20:02.172467 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 00:20:02.194841 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:20:02.195117 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:20:02.195491 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:20:02.195727 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:20:02.199325 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:20:02.199677 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1398) Sep 13 00:20:02.199550 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 00:20:02.203405 jq[1454]: true Sep 13 00:20:02.212665 extend-filesystems[1461]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:20:02.220877 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 13 00:20:02.238463 jq[1466]: true Sep 13 00:20:02.240102 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:20:02.241952 update_engine[1453]: I20250913 00:20:02.239431 1453 main.cc:92] Flatcar Update Engine starting Sep 13 00:20:02.241222 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:20:02.241249 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:20:02.243808 systemd-logind[1449]: New seat seat0. Sep 13 00:20:02.244769 update_engine[1453]: I20250913 00:20:02.244637 1453 update_check_scheduler.cc:74] Next update check in 6m29s Sep 13 00:20:02.245600 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:20:02.250674 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 13 00:20:02.279889 extend-filesystems[1461]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 00:20:02.279889 extend-filesystems[1461]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:20:02.279889 extend-filesystems[1461]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 13 00:20:02.254353 dbus-daemon[1439]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 00:20:02.263959 systemd[1]: Started update-engine.service - Update Engine. 
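Sanity-checking the online resize reported above (553472 to 1864699 blocks at the ext4 default 4 KiB block size), the root filesystem grew from about 2.1 GiB to about 7.1 GiB:

# Convert the block counts from the EXT4-fs resize messages to GiB.
BLOCK = 4096
for blocks in (553472, 1864699):
    print(f"{blocks} blocks = {blocks * BLOCK / 2**30:.1f} GiB")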
Sep 13 00:20:02.288220 tar[1463]: linux-amd64/LICENSE Sep 13 00:20:02.288220 tar[1463]: linux-amd64/helm Sep 13 00:20:02.288655 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Sep 13 00:20:02.266833 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:20:02.267019 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:20:02.270797 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:20:02.270972 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:20:02.279948 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:20:02.289716 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:20:02.292354 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 00:20:02.325273 locksmithd[1492]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:20:02.326380 bash[1493]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:20:02.328634 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:20:02.330926 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 13 00:20:02.403718 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:20:02.429804 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:20:02.441080 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:20:02.447569 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:20:02.447916 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:20:02.451878 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:20:02.529143 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:20:02.543082 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:20:02.546652 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:20:02.548177 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:20:02.571711 containerd[1470]: time="2025-09-13T00:20:02.571566759Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:20:02.593777 containerd[1470]: time="2025-09-13T00:20:02.593550415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:20:02.596417 containerd[1470]: time="2025-09-13T00:20:02.596289660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:20:02.596417 containerd[1470]: time="2025-09-13T00:20:02.596340569Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:20:02.596417 containerd[1470]: time="2025-09-13T00:20:02.596361397Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Sep 13 00:20:02.596595 containerd[1470]: time="2025-09-13T00:20:02.596569796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:20:02.596595 containerd[1470]: time="2025-09-13T00:20:02.596592049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:20:02.596709 containerd[1470]: time="2025-09-13T00:20:02.596688323Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:20:02.596709 containerd[1470]: time="2025-09-13T00:20:02.596702866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:20:02.596934 containerd[1470]: time="2025-09-13T00:20:02.596905116Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:20:02.596934 containerd[1470]: time="2025-09-13T00:20:02.596924031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:20:02.597002 containerd[1470]: time="2025-09-13T00:20:02.596938916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:20:02.597002 containerd[1470]: time="2025-09-13T00:20:02.596950296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:20:02.597054 containerd[1470]: time="2025-09-13T00:20:02.597043896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:20:02.597317 containerd[1470]: time="2025-09-13T00:20:02.597281946Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:20:02.597441 containerd[1470]: time="2025-09-13T00:20:02.597406798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:20:02.597441 containerd[1470]: time="2025-09-13T00:20:02.597430886Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:20:02.597548 containerd[1470]: time="2025-09-13T00:20:02.597525413Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:20:02.597605 containerd[1470]: time="2025-09-13T00:20:02.597580812Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:20:02.604356 containerd[1470]: time="2025-09-13T00:20:02.603924453Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:20:02.604356 containerd[1470]: time="2025-09-13T00:20:02.603998875Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:20:02.604356 containerd[1470]: time="2025-09-13T00:20:02.604016082Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Sep 13 00:20:02.604356 containerd[1470]: time="2025-09-13T00:20:02.604031737Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:20:02.604356 containerd[1470]: time="2025-09-13T00:20:02.604046319Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:20:02.604356 containerd[1470]: time="2025-09-13T00:20:02.604253459Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:20:02.604549 containerd[1470]: time="2025-09-13T00:20:02.604508716Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:20:02.604690 containerd[1470]: time="2025-09-13T00:20:02.604660252Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:20:02.604690 containerd[1470]: time="2025-09-13T00:20:02.604680641Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:20:02.604737 containerd[1470]: time="2025-09-13T00:20:02.604692694Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:20:02.604737 containerd[1470]: time="2025-09-13T00:20:02.604706593Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:20:02.604737 containerd[1470]: time="2025-09-13T00:20:02.604718549Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:20:02.604737 containerd[1470]: time="2025-09-13T00:20:02.604729930Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:20:02.604814 containerd[1470]: time="2025-09-13T00:20:02.604745077Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:20:02.604814 containerd[1470]: time="2025-09-13T00:20:02.604760030Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:20:02.604814 containerd[1470]: time="2025-09-13T00:20:02.604774085Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:20:02.604814 containerd[1470]: time="2025-09-13T00:20:02.604785953Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:20:02.604814 containerd[1470]: time="2025-09-13T00:20:02.604797109Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:20:02.604895 containerd[1470]: time="2025-09-13T00:20:02.604829074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:20:02.604895 containerd[1470]: time="2025-09-13T00:20:02.604842708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:20:02.604895 containerd[1470]: time="2025-09-13T00:20:02.604854313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:20:02.604895 containerd[1470]: time="2025-09-13T00:20:02.604866358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Sep 13 00:20:02.604895 containerd[1470]: time="2025-09-13T00:20:02.604878079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:20:02.604895 containerd[1470]: time="2025-09-13T00:20:02.604890846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:20:02.605009 containerd[1470]: time="2025-09-13T00:20:02.604901923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:20:02.605009 containerd[1470]: time="2025-09-13T00:20:02.604915636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:20:02.605009 containerd[1470]: time="2025-09-13T00:20:02.604928198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:20:02.605009 containerd[1470]: time="2025-09-13T00:20:02.604942399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:20:02.605009 containerd[1470]: time="2025-09-13T00:20:02.604953662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:20:02.605009 containerd[1470]: time="2025-09-13T00:20:02.604965755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:20:02.605009 containerd[1470]: time="2025-09-13T00:20:02.604978179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:20:02.605009 containerd[1470]: time="2025-09-13T00:20:02.604994459Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:20:02.605145 containerd[1470]: time="2025-09-13T00:20:02.605014136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:20:02.605145 containerd[1470]: time="2025-09-13T00:20:02.605026512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:20:02.605145 containerd[1470]: time="2025-09-13T00:20:02.605037462Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:20:02.605145 containerd[1470]: time="2025-09-13T00:20:02.605089084Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:20:02.605145 containerd[1470]: time="2025-09-13T00:20:02.605106331Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:20:02.605145 containerd[1470]: time="2025-09-13T00:20:02.605118559Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:20:02.605145 containerd[1470]: time="2025-09-13T00:20:02.605129403Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:20:02.605145 containerd[1470]: time="2025-09-13T00:20:02.605141301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:20:02.605298 containerd[1470]: time="2025-09-13T00:20:02.605152915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Sep 13 00:20:02.605298 containerd[1470]: time="2025-09-13T00:20:02.605163222Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:20:02.605298 containerd[1470]: time="2025-09-13T00:20:02.605173871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:20:02.605920 containerd[1470]: time="2025-09-13T00:20:02.605609515Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:20:02.606062 containerd[1470]: time="2025-09-13T00:20:02.605965994Z" level=info msg="Connect containerd service" Sep 13 00:20:02.606642 containerd[1470]: time="2025-09-13T00:20:02.606584505Z" level=info msg="using legacy CRI server" Sep 13 00:20:02.606803 containerd[1470]: time="2025-09-13T00:20:02.606763165Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:20:02.606915 containerd[1470]: time="2025-09-13T00:20:02.606890359Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:20:02.607739 
containerd[1470]: time="2025-09-13T00:20:02.607706250Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:20:02.608085 containerd[1470]: time="2025-09-13T00:20:02.608030932Z" level=info msg="Start subscribing containerd event" Sep 13 00:20:02.608126 containerd[1470]: time="2025-09-13T00:20:02.608103157Z" level=info msg="Start recovering state" Sep 13 00:20:02.608267 containerd[1470]: time="2025-09-13T00:20:02.608232683Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:20:02.608350 containerd[1470]: time="2025-09-13T00:20:02.608236158Z" level=info msg="Start event monitor" Sep 13 00:20:02.608350 containerd[1470]: time="2025-09-13T00:20:02.608294162Z" level=info msg="Start snapshots syncer" Sep 13 00:20:02.608350 containerd[1470]: time="2025-09-13T00:20:02.608308051Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:20:02.608350 containerd[1470]: time="2025-09-13T00:20:02.608318075Z" level=info msg="Start streaming server" Sep 13 00:20:02.608501 containerd[1470]: time="2025-09-13T00:20:02.608321120Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:20:02.608583 containerd[1470]: time="2025-09-13T00:20:02.608562401Z" level=info msg="containerd successfully booted in 0.061371s" Sep 13 00:20:02.608751 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:20:02.793853 tar[1463]: linux-amd64/README.md Sep 13 00:20:02.808558 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:20:03.782867 systemd-networkd[1399]: eth0: Gained IPv6LL Sep 13 00:20:03.786311 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:20:03.788469 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:20:03.800007 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 13 00:20:03.802803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:20:03.805086 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:20:03.826932 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 00:20:03.827198 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 00:20:03.828975 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:20:03.831395 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:20:05.159194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:20:05.161195 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:20:05.162464 systemd[1]: Startup finished in 939ms (kernel) + 5.879s (initrd) + 5.585s (userspace) = 12.404s. 
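The containerd `failed to load cni during init` error above is expected on a node that has not joined a cluster yet: nothing has written a network config into /etc/cni/net.d, so the CRI plugin comes up without pod networking and the "cni network conf syncer" it starts picks a config up later. A hedged sketch of the kind of conflist that would satisfy it; the network name, bridge device, and subnet below are placeholders, not values from this host (normally a CNI plugin such as flannel or calico installs its own file):

```python
import json
import pathlib

# Placeholder values throughout; this only illustrates the conflist shape.
conflist = {
    "cniVersion": "0.4.0",
    "name": "containerd-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}
pathlib.Path("/etc/cni/net.d/10-containerd-net.conflist").write_text(
    json.dumps(conflist, indent=2)
)
```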
Sep 13 00:20:05.174980 (kubelet)[1552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:20:05.993979 kubelet[1552]: E0913 00:20:05.993900 1552 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:20:05.999032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:20:05.999376 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:20:05.999913 systemd[1]: kubelet.service: Consumed 2.085s CPU time. Sep 13 00:20:06.528363 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:20:06.529844 systemd[1]: Started sshd@0-10.0.0.7:22-10.0.0.1:55586.service - OpenSSH per-connection server daemon (10.0.0.1:55586). Sep 13 00:20:06.574374 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 55586 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU Sep 13 00:20:06.576400 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:20:06.585204 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:20:06.595851 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:20:06.597836 systemd-logind[1449]: New session 1 of user core. Sep 13 00:20:06.608886 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:20:06.611974 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:20:06.621396 (systemd)[1569]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:20:06.747452 systemd[1569]: Queued start job for default target default.target. Sep 13 00:20:06.761035 systemd[1569]: Created slice app.slice - User Application Slice. Sep 13 00:20:06.761063 systemd[1569]: Reached target paths.target - Paths. Sep 13 00:20:06.761077 systemd[1569]: Reached target timers.target - Timers. Sep 13 00:20:06.762869 systemd[1569]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:20:06.777987 systemd[1569]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:20:06.778131 systemd[1569]: Reached target sockets.target - Sockets. Sep 13 00:20:06.778150 systemd[1569]: Reached target basic.target - Basic System. Sep 13 00:20:06.778193 systemd[1569]: Reached target default.target - Main User Target. Sep 13 00:20:06.778228 systemd[1569]: Startup finished in 148ms. Sep 13 00:20:06.778692 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:20:06.780668 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:20:06.846456 systemd[1]: Started sshd@1-10.0.0.7:22-10.0.0.1:55590.service - OpenSSH per-connection server daemon (10.0.0.1:55590). Sep 13 00:20:06.884643 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 55590 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU Sep 13 00:20:06.886635 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:20:06.890971 systemd-logind[1449]: New session 2 of user core. Sep 13 00:20:06.904888 systemd[1]: Started session-2.scope - Session 2 of User core. 
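The kubelet failure above is the normal first-boot crash loop: kubelet.service starts before anything (typically kubeadm) has written /var/lib/kubelet/config.yaml, exits with status 1, and systemd keeps rescheduling it until the file exists. A minimal sketch of the file that ends the loop; only the apiVersion/kind lines are fixed by the API, the rest is an assumption (kubeadm generates a much fuller config):

```python
import pathlib

# Smallest KubeletConfiguration that gets past the "no such file" error.
# cgroupDriver: systemd matches the SystemdCgroup:true runc option in the
# containerd CRI config dumped earlier in this log.
CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""
path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(CONFIG)
```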
Sep 13 00:20:06.960289 sshd[1580]: pam_unix(sshd:session): session closed for user core Sep 13 00:20:06.971704 systemd[1]: sshd@1-10.0.0.7:22-10.0.0.1:55590.service: Deactivated successfully. Sep 13 00:20:06.973670 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:20:06.975377 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:20:06.986875 systemd[1]: Started sshd@2-10.0.0.7:22-10.0.0.1:55602.service - OpenSSH per-connection server daemon (10.0.0.1:55602). Sep 13 00:20:06.987756 systemd-logind[1449]: Removed session 2. Sep 13 00:20:07.014866 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 55602 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU Sep 13 00:20:07.016346 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:20:07.020562 systemd-logind[1449]: New session 3 of user core. Sep 13 00:20:07.031748 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:20:07.082273 sshd[1587]: pam_unix(sshd:session): session closed for user core Sep 13 00:20:07.094831 systemd[1]: sshd@2-10.0.0.7:22-10.0.0.1:55602.service: Deactivated successfully. Sep 13 00:20:07.097176 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:20:07.098778 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:20:07.113897 systemd[1]: Started sshd@3-10.0.0.7:22-10.0.0.1:55618.service - OpenSSH per-connection server daemon (10.0.0.1:55618). Sep 13 00:20:07.114882 systemd-logind[1449]: Removed session 3. Sep 13 00:20:07.145057 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 55618 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU Sep 13 00:20:07.146787 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:20:07.150785 systemd-logind[1449]: New session 4 of user core. Sep 13 00:20:07.159735 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:20:07.213833 sshd[1594]: pam_unix(sshd:session): session closed for user core Sep 13 00:20:07.228447 systemd[1]: sshd@3-10.0.0.7:22-10.0.0.1:55618.service: Deactivated successfully. Sep 13 00:20:07.230146 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:20:07.231826 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:20:07.241845 systemd[1]: Started sshd@4-10.0.0.7:22-10.0.0.1:55626.service - OpenSSH per-connection server daemon (10.0.0.1:55626). Sep 13 00:20:07.242744 systemd-logind[1449]: Removed session 4. Sep 13 00:20:07.269774 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 55626 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU Sep 13 00:20:07.271526 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:20:07.275645 systemd-logind[1449]: New session 5 of user core. Sep 13 00:20:07.285757 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:20:07.347360 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:20:07.347760 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:20:07.363956 sudo[1604]: pam_unix(sudo:session): session closed for user root Sep 13 00:20:07.365854 sshd[1601]: pam_unix(sshd:session): session closed for user core Sep 13 00:20:07.373518 systemd[1]: sshd@4-10.0.0.7:22-10.0.0.1:55626.service: Deactivated successfully. Sep 13 00:20:07.375498 systemd[1]: session-5.scope: Deactivated successfully. 
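Each `Accepted publickey` line above identifies the same key by its OpenSSH-style fingerprint: the SHA-256 of the raw key blob, base64-encoded without padding and prefixed `SHA256:`. A small reproduction; the key material below is a stand-in so the snippet runs, not the key from this log:

```python
import base64
import hashlib

def ssh_sha256_fingerprint(authorized_keys_line):
    """The SHA256:... form sshd prints on 'Accepted publickey'."""
    blob = base64.b64decode(authorized_keys_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Stand-in blob; substitute a real line from ~/.ssh/authorized_keys.
fake_blob = base64.b64encode(b"not a real ssh key").decode()
print(ssh_sha256_fingerprint(f"ssh-rsa {fake_blob} core@host"))
```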
Sep 13 00:20:07.377153 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:20:07.383963 systemd[1]: Started sshd@5-10.0.0.7:22-10.0.0.1:55632.service - OpenSSH per-connection server daemon (10.0.0.1:55632). Sep 13 00:20:07.384999 systemd-logind[1449]: Removed session 5. Sep 13 00:20:07.411733 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 55632 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU Sep 13 00:20:07.413233 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:20:07.417511 systemd-logind[1449]: New session 6 of user core. Sep 13 00:20:07.426842 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:20:07.480332 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:20:07.480713 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:20:07.484482 sudo[1613]: pam_unix(sudo:session): session closed for user root Sep 13 00:20:07.491115 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:20:07.491448 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:20:07.506894 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:20:07.509301 auditctl[1616]: No rules Sep 13 00:20:07.510626 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:20:07.510928 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:20:07.512881 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:20:07.545369 augenrules[1634]: No rules Sep 13 00:20:07.546412 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:20:07.547670 sudo[1612]: pam_unix(sudo:session): session closed for user root Sep 13 00:20:07.549596 sshd[1609]: pam_unix(sshd:session): session closed for user core Sep 13 00:20:07.560436 systemd[1]: sshd@5-10.0.0.7:22-10.0.0.1:55632.service: Deactivated successfully. Sep 13 00:20:07.562108 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:20:07.563840 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:20:07.574861 systemd[1]: Started sshd@6-10.0.0.7:22-10.0.0.1:55642.service - OpenSSH per-connection server daemon (10.0.0.1:55642). Sep 13 00:20:07.575792 systemd-logind[1449]: Removed session 6. Sep 13 00:20:07.605007 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 55642 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU Sep 13 00:20:07.606868 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:20:07.611669 systemd-logind[1449]: New session 7 of user core. Sep 13 00:20:07.628852 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:20:07.683724 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:20:07.684064 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:20:08.306833 systemd[1]: Starting docker.service - Docker Application Container Engine... 
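The audit-rules churn above is driven by the two sudo commands just before it: the default rule files are deleted, then `systemctl restart audit-rules` stops the unit (its stop step runs auditctl, hence the first `No rules`) and starts it again (augenrules reloads and likewise finds nothing left). The same sequence, reproduced from the commands shown in the sudo lines; it must run as root on a matching image, so treat it as a transcript rather than a recommendation:

```python
import subprocess

# Exactly the commands from the sudo log entries above.
subprocess.run(
    ["rm", "-rf",
     "/etc/audit/rules.d/80-selinux.rules",
     "/etc/audit/rules.d/99-default.rules"],
    check=True,
)
subprocess.run(["systemctl", "restart", "audit-rules"], check=True)
```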
Sep 13 00:20:08.307114 (dockerd)[1663]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:20:08.588902 dockerd[1663]: time="2025-09-13T00:20:08.588726105Z" level=info msg="Starting up" Sep 13 00:20:09.177324 dockerd[1663]: time="2025-09-13T00:20:09.177256476Z" level=info msg="Loading containers: start." Sep 13 00:20:09.289648 kernel: Initializing XFRM netlink socket Sep 13 00:20:09.370224 systemd-networkd[1399]: docker0: Link UP Sep 13 00:20:09.398244 dockerd[1663]: time="2025-09-13T00:20:09.398199953Z" level=info msg="Loading containers: done." Sep 13 00:20:09.412317 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2350976941-merged.mount: Deactivated successfully. Sep 13 00:20:09.415059 dockerd[1663]: time="2025-09-13T00:20:09.415006157Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:20:09.415156 dockerd[1663]: time="2025-09-13T00:20:09.415134477Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:20:09.415297 dockerd[1663]: time="2025-09-13T00:20:09.415272623Z" level=info msg="Daemon has completed initialization" Sep 13 00:20:09.454372 dockerd[1663]: time="2025-09-13T00:20:09.454092734Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:20:09.454489 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:20:10.458977 containerd[1470]: time="2025-09-13T00:20:10.458921284Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 13 00:20:11.039656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount25860914.mount: Deactivated successfully. 
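dockerd's `Not using native diff for overlay2` warning above is keyed off the kernel feature the message itself names: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, overlayfs can redirect renamed directories in ways a naive layer diff would misread, so docker falls back to a slower but safe differ when building images. A way to inspect the live settings, assuming the overlay module exposes its parameters under /sys/module/overlay (availability varies by kernel build):

```python
import pathlib

PARAMS = pathlib.Path("/sys/module/overlay/parameters")
for name in ("redirect_dir", "metacopy"):
    p = PARAMS / name
    print(name, p.read_text().strip() if p.exists() else "unavailable")
```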
Sep 13 00:20:12.572884 containerd[1470]: time="2025-09-13T00:20:12.572799382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:12.573525 containerd[1470]: time="2025-09-13T00:20:12.573455953Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 13 00:20:12.574929 containerd[1470]: time="2025-09-13T00:20:12.574867157Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:12.578201 containerd[1470]: time="2025-09-13T00:20:12.578138526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:12.579989 containerd[1470]: time="2025-09-13T00:20:12.579926825Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.120961674s" Sep 13 00:20:12.579989 containerd[1470]: time="2025-09-13T00:20:12.579976339Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 13 00:20:12.580741 containerd[1470]: time="2025-09-13T00:20:12.580705013Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 13 00:20:14.861808 containerd[1470]: time="2025-09-13T00:20:14.861732578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:14.862700 containerd[1470]: time="2025-09-13T00:20:14.862642377Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 13 00:20:14.863875 containerd[1470]: time="2025-09-13T00:20:14.863836417Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:14.869635 containerd[1470]: time="2025-09-13T00:20:14.869539949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:14.870893 containerd[1470]: time="2025-09-13T00:20:14.870853639Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.290104725s" Sep 13 00:20:14.870893 containerd[1470]: time="2025-09-13T00:20:14.870890327Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 13 00:20:14.871513 
containerd[1470]: time="2025-09-13T00:20:14.871469541Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 13 00:20:16.228908 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:20:16.364934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:20:16.617220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:20:16.623343 (kubelet)[1886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:20:16.938760 containerd[1470]: time="2025-09-13T00:20:16.938579582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:16.939881 containerd[1470]: time="2025-09-13T00:20:16.939527607Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 13 00:20:16.941138 containerd[1470]: time="2025-09-13T00:20:16.941074556Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:16.944116 containerd[1470]: time="2025-09-13T00:20:16.944078536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:16.946567 containerd[1470]: time="2025-09-13T00:20:16.946269315Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.074742414s" Sep 13 00:20:16.946567 containerd[1470]: time="2025-09-13T00:20:16.946330745Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 13 00:20:16.947162 containerd[1470]: time="2025-09-13T00:20:16.947122641Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 13 00:20:16.953126 kubelet[1886]: E0913 00:20:16.953073 1886 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:20:16.961465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:20:16.961738 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:20:18.654945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4145803107.mount: Deactivated successfully. 
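The three control-plane image pulls so far report both bytes read and wall time, which gives a rough registry throughput figure:

```python
# (bytes read, seconds) pairs taken from the containerd pull logs above.
pulls = {
    "kube-apiserver:v1.33.5":          (30_114_893, 2.120961674),
    "kube-controller-manager:v1.33.5": (26_020_844, 2.290104725),
    "kube-scheduler:v1.33.5":          (20_155_568, 2.074742414),
}
for image, (nbytes, secs) in pulls.items():
    print(f"{image}: {nbytes / secs / 2**20:5.1f} MiB/s")
# All three land around 9-14 MiB/s from registry.k8s.io.
```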
Sep 13 00:20:18.942178 containerd[1470]: time="2025-09-13T00:20:18.942031640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:18.942961 containerd[1470]: time="2025-09-13T00:20:18.942924341Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 13 00:20:18.944198 containerd[1470]: time="2025-09-13T00:20:18.944156262Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:18.946257 containerd[1470]: time="2025-09-13T00:20:18.946214281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:18.946840 containerd[1470]: time="2025-09-13T00:20:18.946787048Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.999622267s" Sep 13 00:20:18.946899 containerd[1470]: time="2025-09-13T00:20:18.946845918Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 13 00:20:18.947361 containerd[1470]: time="2025-09-13T00:20:18.947328331Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 13 00:20:19.802571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1062671374.mount: Deactivated successfully. 
Sep 13 00:20:22.829871 containerd[1470]: time="2025-09-13T00:20:22.829781957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:22.831228 containerd[1470]: time="2025-09-13T00:20:22.831183582Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 13 00:20:22.833030 containerd[1470]: time="2025-09-13T00:20:22.832992773Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:22.839272 containerd[1470]: time="2025-09-13T00:20:22.839209015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:22.840718 containerd[1470]: time="2025-09-13T00:20:22.840672345Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.893311329s" Sep 13 00:20:22.840718 containerd[1470]: time="2025-09-13T00:20:22.840710208Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 13 00:20:22.841370 containerd[1470]: time="2025-09-13T00:20:22.841337084Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:20:23.352262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2181923630.mount: Deactivated successfully. 
Sep 13 00:20:23.359388 containerd[1470]: time="2025-09-13T00:20:23.359310403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:23.360285 containerd[1470]: time="2025-09-13T00:20:23.360199091Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 00:20:23.361511 containerd[1470]: time="2025-09-13T00:20:23.361466027Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:23.363978 containerd[1470]: time="2025-09-13T00:20:23.363931967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:23.364724 containerd[1470]: time="2025-09-13T00:20:23.364681591Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 523.299342ms" Sep 13 00:20:23.364724 containerd[1470]: time="2025-09-13T00:20:23.364718554Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:20:23.365321 containerd[1470]: time="2025-09-13T00:20:23.365290213Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 13 00:20:24.059109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983778013.mount: Deactivated successfully. Sep 13 00:20:26.821330 containerd[1470]: time="2025-09-13T00:20:26.821242712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:26.821929 containerd[1470]: time="2025-09-13T00:20:26.821840858Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 13 00:20:26.823261 containerd[1470]: time="2025-09-13T00:20:26.823232261Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:26.826423 containerd[1470]: time="2025-09-13T00:20:26.826356762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:26.827564 containerd[1470]: time="2025-09-13T00:20:26.827491462Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.462156815s" Sep 13 00:20:26.827564 containerd[1470]: time="2025-09-13T00:20:26.827545297Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 13 00:20:26.978930 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
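kubelet.service is now on its second scheduled restart. The journal timestamps show the spacing systemd applies between a failure and the next start attempt, about ten seconds, consistent with the usual kubeadm drop-in's RestartSec=10 (an assumption; the unit file itself does not appear in this log):

```python
from datetime import datetime

fmt = "%H:%M:%S.%f"
failed    = datetime.strptime("00:20:16.961738", fmt)  # 'Failed with result'
restarted = datetime.strptime("00:20:26.978930", fmt)  # 'Scheduled restart job'
print(restarted - failed)  # 0:00:10.017192
```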
Sep 13 00:20:26.986862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:20:27.204757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:20:27.209121 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:20:27.251883 kubelet[2028]: E0913 00:20:27.251746 2028 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:20:27.256473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:20:27.256729 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:20:29.685659 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:20:29.693949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:20:29.720368 systemd[1]: Reloading requested from client PID 2059 ('systemctl') (unit session-7.scope)... Sep 13 00:20:29.720386 systemd[1]: Reloading... Sep 13 00:20:29.802659 zram_generator::config[2101]: No configuration found. Sep 13 00:20:30.714585 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:20:30.796113 systemd[1]: Reloading finished in 1075 ms. Sep 13 00:20:30.850878 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:20:30.851144 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:20:30.854804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:20:31.032718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:20:31.037944 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:20:31.138696 kubelet[2147]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:20:31.138696 kubelet[2147]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:20:31.138696 kubelet[2147]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
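The deprecation warnings on this (finally successful) kubelet start concern flags that now belong in the config file. The first two map directly onto KubeletConfiguration fields; the third is simply going away, as the warning itself says:

```python
# Flag -> KubeletConfiguration field (v1beta1 names). The values would
# move from the unit's ExecStart into /var/lib/kubelet/config.yaml.
FLAG_TO_FIELD = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint",
    "--volume-plugin-dir":          "volumePluginDir",
    # --pod-infra-container-image: removed in 1.35; the image garbage
    # collector will read the sandbox image from the CRI runtime instead.
}
for flag, field in FLAG_TO_FIELD.items():
    print(f"{flag:31} -> {field}")
```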
Sep 13 00:20:31.139267 kubelet[2147]: I0913 00:20:31.138738 2147 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:20:31.549598 kubelet[2147]: I0913 00:20:31.549525 2147 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:20:31.549598 kubelet[2147]: I0913 00:20:31.549569 2147 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:20:31.549960 kubelet[2147]: I0913 00:20:31.549929 2147 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:20:31.590487 kubelet[2147]: I0913 00:20:31.589314 2147 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:20:31.594306 kubelet[2147]: E0913 00:20:31.594251 2147 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 00:20:31.603017 kubelet[2147]: E0913 00:20:31.602962 2147 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:20:31.603017 kubelet[2147]: I0913 00:20:31.603010 2147 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:20:31.609467 kubelet[2147]: I0913 00:20:31.609442 2147 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:20:31.609877 kubelet[2147]: I0913 00:20:31.609843 2147 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:20:31.610105 kubelet[2147]: I0913 00:20:31.609873 2147 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:20:31.610188 kubelet[2147]: I0913 00:20:31.610122 2147 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:20:31.610188 kubelet[2147]: I0913 00:20:31.610138 2147 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:20:31.610372 kubelet[2147]: I0913 00:20:31.610349 2147 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:20:31.613016 kubelet[2147]: I0913 00:20:31.612982 2147 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:20:31.613051 kubelet[2147]: I0913 00:20:31.613022 2147 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:20:31.613080 kubelet[2147]: I0913 00:20:31.613066 2147 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:20:31.613111 kubelet[2147]: I0913 00:20:31.613086 2147 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:20:31.620827 kubelet[2147]: E0913 00:20:31.620179 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:20:31.620827 kubelet[2147]: E0913 00:20:31.620648 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:20:31.621229 
kubelet[2147]: I0913 00:20:31.621205 2147 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:20:31.621800 kubelet[2147]: I0913 00:20:31.621756 2147 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 00:20:31.622679 kubelet[2147]: W0913 00:20:31.622656 2147 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:20:31.626133 kubelet[2147]: I0913 00:20:31.626105 2147 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:20:31.626193 kubelet[2147]: I0913 00:20:31.626165 2147 server.go:1289] "Started kubelet" Sep 13 00:20:31.626312 kubelet[2147]: I0913 00:20:31.626251 2147 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:20:31.626904 kubelet[2147]: I0913 00:20:31.626876 2147 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:20:31.627149 kubelet[2147]: I0913 00:20:31.626876 2147 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:20:31.628177 kubelet[2147]: I0913 00:20:31.628161 2147 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:20:31.629139 kubelet[2147]: I0913 00:20:31.629108 2147 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:20:31.630521 kubelet[2147]: E0913 00:20:31.630503 2147 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:20:31.631830 kubelet[2147]: I0913 00:20:31.630687 2147 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:20:31.631830 kubelet[2147]: E0913 00:20:31.631187 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:31.631830 kubelet[2147]: E0913 00:20:31.629740 2147 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864af9a40ab5dfc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:20:31.626132988 +0000 UTC m=+0.583738217,LastTimestamp:2025-09-13 00:20:31.626132988 +0000 UTC m=+0.583738217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:20:31.631830 kubelet[2147]: I0913 00:20:31.631236 2147 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:20:31.631830 kubelet[2147]: I0913 00:20:31.631221 2147 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:20:31.631830 kubelet[2147]: I0913 00:20:31.631433 2147 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:20:31.631830 kubelet[2147]: E0913 00:20:31.631773 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:20:31.633309 kubelet[2147]: E0913 00:20:31.631783 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="200ms" Sep 13 00:20:31.633343 kubelet[2147]: I0913 00:20:31.633310 2147 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:20:31.633441 kubelet[2147]: I0913 00:20:31.633410 2147 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:20:31.636528 kubelet[2147]: I0913 00:20:31.636419 2147 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:20:31.652339 kubelet[2147]: I0913 00:20:31.652307 2147 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:20:31.652503 kubelet[2147]: I0913 00:20:31.652471 2147 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:20:31.652503 kubelet[2147]: I0913 00:20:31.652499 2147 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:20:31.664146 kubelet[2147]: I0913 00:20:31.664101 2147 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:20:31.666525 kubelet[2147]: I0913 00:20:31.665911 2147 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:20:31.666525 kubelet[2147]: I0913 00:20:31.665954 2147 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:20:31.666525 kubelet[2147]: I0913 00:20:31.665982 2147 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 00:20:31.666525 kubelet[2147]: I0913 00:20:31.665996 2147 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:20:31.666525 kubelet[2147]: E0913 00:20:31.666056 2147 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:20:31.666993 kubelet[2147]: E0913 00:20:31.666834 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:20:31.731713 kubelet[2147]: E0913 00:20:31.731635 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:31.767047 kubelet[2147]: E0913 00:20:31.766942 2147 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:20:31.832551 kubelet[2147]: E0913 00:20:31.832354 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:31.832893 kubelet[2147]: E0913 00:20:31.832850 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="400ms" Sep 13 00:20:31.933544 kubelet[2147]: E0913 00:20:31.933463 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:31.967843 kubelet[2147]: E0913 00:20:31.967731 2147 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:20:32.034290 kubelet[2147]: E0913 00:20:32.034219 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:32.135261 kubelet[2147]: E0913 00:20:32.135058 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:32.234183 kubelet[2147]: E0913 00:20:32.234111 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="800ms" Sep 13 00:20:32.235263 kubelet[2147]: E0913 00:20:32.235228 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:32.335751 kubelet[2147]: E0913 00:20:32.335665 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:32.367945 kubelet[2147]: E0913 00:20:32.367878 2147 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:20:32.436762 kubelet[2147]: E0913 00:20:32.436544 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:32.437210 kubelet[2147]: E0913 00:20:32.437165 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:20:32.536989 kubelet[2147]: E0913 00:20:32.536907 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:32.637831 kubelet[2147]: E0913 00:20:32.637768 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:32.738373 kubelet[2147]: E0913 00:20:32.738308 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:32.839032 kubelet[2147]: E0913 00:20:32.838952 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:32.920934 kubelet[2147]: E0913 00:20:32.920866 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:20:32.933174 kubelet[2147]: E0913 00:20:32.933103 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:20:32.933469 kubelet[2147]: E0913 00:20:32.933435 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:20:32.939876 kubelet[2147]: E0913 00:20:32.939811 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:33.034990 kubelet[2147]: E0913 00:20:33.034837 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="1.6s" Sep 13 00:20:33.040806 kubelet[2147]: E0913 00:20:33.040771 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:33.141918 kubelet[2147]: E0913 00:20:33.141830 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:33.169076 kubelet[2147]: E0913 00:20:33.169024 2147 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:20:33.242483 kubelet[2147]: E0913 00:20:33.242452 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:33.343221 kubelet[2147]: E0913 00:20:33.343044 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:33.443978 kubelet[2147]: E0913 00:20:33.443884 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:33.544688 
kubelet[2147]: E0913 00:20:33.544588 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:33.645311 kubelet[2147]: E0913 00:20:33.645190 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:33.745838 kubelet[2147]: E0913 00:20:33.745761 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:33.757009 kubelet[2147]: E0913 00:20:33.756959 2147 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 00:20:33.846864 kubelet[2147]: E0913 00:20:33.846774 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:33.947561 kubelet[2147]: E0913 00:20:33.947389 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:34.048039 kubelet[2147]: E0913 00:20:34.047957 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:34.148971 kubelet[2147]: E0913 00:20:34.148923 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:34.249430 kubelet[2147]: E0913 00:20:34.249368 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:34.349992 kubelet[2147]: E0913 00:20:34.349925 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:34.422801 kubelet[2147]: I0913 00:20:34.422749 2147 policy_none.go:49] "None policy: Start" Sep 13 00:20:34.422801 kubelet[2147]: I0913 00:20:34.422790 2147 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:20:34.422801 kubelet[2147]: I0913 00:20:34.422815 2147 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:20:34.450504 kubelet[2147]: E0913 00:20:34.450399 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:34.538199 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:20:34.551416 kubelet[2147]: E0913 00:20:34.551370 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:34.556662 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:20:34.560050 kubelet[2147]: E0913 00:20:34.560004 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:20:34.560315 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
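The "Failed to ensure lease exists, will retry" entries above step their retry interval from 200ms to 400ms to 800ms to 1.6s (and to 3.2s just below): a doubling backoff. A minimal Python sketch of that pattern, assuming a plain base-times-factor progression (client-go's real backoff also applies jitter and a cap):

```python
# Sketch only: reproduce the doubling retry intervals seen in the lease
# controller's "will retry" messages (200ms, 400ms, 800ms, 1.6s, 3.2s).
def backoff_intervals(base: float = 0.2, factor: float = 2.0, steps: int = 5):
    interval = base
    for _ in range(steps):
        yield interval
        interval *= factor

print([f"{i:g}s" for i in backoff_intervals()])
# ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s']
```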
Sep 13 00:20:34.578458 kubelet[2147]: E0913 00:20:34.578392 2147 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:20:34.578796 kubelet[2147]: I0913 00:20:34.578768 2147 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:20:34.578954 kubelet[2147]: I0913 00:20:34.578794 2147 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:20:34.579303 kubelet[2147]: I0913 00:20:34.579114 2147 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:20:34.579906 kubelet[2147]: E0913 00:20:34.579847 2147 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:20:34.579906 kubelet[2147]: E0913 00:20:34.579894 2147 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:20:34.635735 kubelet[2147]: E0913 00:20:34.635668 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="3.2s" Sep 13 00:20:34.680462 kubelet[2147]: I0913 00:20:34.680389 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:20:34.680857 kubelet[2147]: E0913 00:20:34.680807 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Sep 13 00:20:34.820412 systemd[1]: Created slice kubepods-burstable-pod7ba34e7ccbb81fe2421877e3fcee28dd.slice - libcontainer container kubepods-burstable-pod7ba34e7ccbb81fe2421877e3fcee28dd.slice. 
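systemd just created kubepods-burstable-pod7ba34e7ccbb81fe2421877e3fcee28dd.slice for the pod with UID 7ba34e7ccbb81fe2421877e3fcee28dd. A hedged sketch of how those slice names relate to QoS class and pod UID (the real mapping lives in the kubelet's systemd cgroup driver; the dash-to-underscore step is visible later in kubepods-besteffort-pod93d38106_bf2b_4015_ba11_1187a32bf980.slice):

```python
# Illustration, not kubelet code: with the systemd cgroup driver, a pod's
# cgroup is kubepods.slice/kubepods-<qos>.slice/kubepods-<qos>-pod<uid>.slice
# (guaranteed pods sit directly under kubepods.slice), and '-' in the UID
# becomes '_' so the UID survives systemd's slice-name encoding.
def pod_slice(qos: str, uid: str) -> str:
    uid = uid.replace("-", "_")
    if qos == "guaranteed":
        return f"kubepods-pod{uid}.slice"
    return f"kubepods-{qos}-pod{uid}.slice"

print(pod_slice("burstable", "7ba34e7ccbb81fe2421877e3fcee28dd"))
# kubepods-burstable-pod7ba34e7ccbb81fe2421877e3fcee28dd.slice
print(pod_slice("besteffort", "93d38106-bf2b-4015-ba11-1187a32bf980"))
# kubepods-besteffort-pod93d38106_bf2b_4015_ba11_1187a32bf980.slice
```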
Sep 13 00:20:34.831836 kubelet[2147]: E0913 00:20:34.831785 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:20:34.853112 kubelet[2147]: I0913 00:20:34.853051 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:20:34.853238 kubelet[2147]: I0913 00:20:34.853116 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:20:34.853238 kubelet[2147]: I0913 00:20:34.853138 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:20:34.853238 kubelet[2147]: I0913 00:20:34.853155 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:20:34.853238 kubelet[2147]: I0913 00:20:34.853177 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:20:34.853238 kubelet[2147]: I0913 00:20:34.853214 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ba34e7ccbb81fe2421877e3fcee28dd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7ba34e7ccbb81fe2421877e3fcee28dd\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:20:34.853375 kubelet[2147]: I0913 00:20:34.853256 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ba34e7ccbb81fe2421877e3fcee28dd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7ba34e7ccbb81fe2421877e3fcee28dd\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:20:34.853375 kubelet[2147]: I0913 00:20:34.853275 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ba34e7ccbb81fe2421877e3fcee28dd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7ba34e7ccbb81fe2421877e3fcee28dd\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:20:34.882343 kubelet[2147]: I0913 00:20:34.882306 2147 kubelet_node_status.go:75] "Attempting to register node" 
node="localhost" Sep 13 00:20:34.882941 kubelet[2147]: E0913 00:20:34.882900 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Sep 13 00:20:34.890769 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 13 00:20:34.892940 kubelet[2147]: E0913 00:20:34.892903 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:20:34.954220 kubelet[2147]: I0913 00:20:34.954189 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:20:35.061730 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. Sep 13 00:20:35.063554 kubelet[2147]: E0913 00:20:35.063526 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:20:35.063996 kubelet[2147]: E0913 00:20:35.063975 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:35.064643 containerd[1470]: time="2025-09-13T00:20:35.064585550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 13 00:20:35.131242 kubelet[2147]: E0913 00:20:35.131018 2147 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864af9a40ab5dfc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:20:31.626132988 +0000 UTC m=+0.583738217,LastTimestamp:2025-09-13 00:20:31.626132988 +0000 UTC m=+0.583738217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:20:35.132266 kubelet[2147]: E0913 00:20:35.132216 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:35.132808 containerd[1470]: time="2025-09-13T00:20:35.132772809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7ba34e7ccbb81fe2421877e3fcee28dd,Namespace:kube-system,Attempt:0,}" Sep 13 00:20:35.194338 kubelet[2147]: E0913 00:20:35.194282 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 
00:20:35.194981 containerd[1470]: time="2025-09-13T00:20:35.194921274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 13 00:20:35.285053 kubelet[2147]: I0913 00:20:35.284999 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:20:35.285591 kubelet[2147]: E0913 00:20:35.285506 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Sep 13 00:20:35.412511 kubelet[2147]: E0913 00:20:35.412328 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:20:35.461207 kubelet[2147]: E0913 00:20:35.461145 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:20:35.623313 kubelet[2147]: E0913 00:20:35.623224 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:20:36.134897 kubelet[2147]: I0913 00:20:36.134827 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:20:36.137290 kubelet[2147]: E0913 00:20:36.135205 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Sep 13 00:20:36.385257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3537230825.mount: Deactivated successfully. 
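The recurring dns.go:153 "Nameserver limits exceeded" warnings above report the applied nameserver line "1.1.1.1 1.0.0.1 8.8.8.8": the glibc resolver only honours the first three nameserver entries in resolv.conf, so the kubelet drops the rest and warns. A small sketch of that truncation (the fourth server below is invented for illustration; the log does not say which entries were omitted):

```python
# Hypothetical illustration of the dns.go warning: keep only the first
# three nameservers, as the glibc resolver (MAXNS = 3) would.
MAX_NAMESERVERS = 3

def applied_nameservers(servers: list[str]) -> list[str]:
    return servers[:MAX_NAMESERVERS]

# "9.9.9.9" is a made-up fourth entry; the omitted servers are not logged.
print(" ".join(applied_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"])))
# 1.1.1.1 1.0.0.1 8.8.8.8
```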
Sep 13 00:20:36.392334 containerd[1470]: time="2025-09-13T00:20:36.392250558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:20:36.393239 containerd[1470]: time="2025-09-13T00:20:36.393152686Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 13 00:20:36.394387 containerd[1470]: time="2025-09-13T00:20:36.394342511Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:20:36.395594 containerd[1470]: time="2025-09-13T00:20:36.395553017Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:20:36.396703 containerd[1470]: time="2025-09-13T00:20:36.396637409Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:20:36.397921 containerd[1470]: time="2025-09-13T00:20:36.397883112Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:20:36.398867 containerd[1470]: time="2025-09-13T00:20:36.398808555Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:20:36.401271 containerd[1470]: time="2025-09-13T00:20:36.401229055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:20:36.403169 containerd[1470]: time="2025-09-13T00:20:36.403113186Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.338409437s" Sep 13 00:20:36.403817 containerd[1470]: time="2025-09-13T00:20:36.403755511Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.270902736s" Sep 13 00:20:36.407023 containerd[1470]: time="2025-09-13T00:20:36.406973545Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.211955374s" Sep 13 00:20:36.626243 containerd[1470]: time="2025-09-13T00:20:36.626126521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:20:36.626243 containerd[1470]: time="2025-09-13T00:20:36.626183330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:20:36.626243 containerd[1470]: time="2025-09-13T00:20:36.626207948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:20:36.626511 containerd[1470]: time="2025-09-13T00:20:36.626318733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:20:36.630250 containerd[1470]: time="2025-09-13T00:20:36.629532039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:20:36.630250 containerd[1470]: time="2025-09-13T00:20:36.629698541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:20:36.630250 containerd[1470]: time="2025-09-13T00:20:36.629722298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:20:36.630250 containerd[1470]: time="2025-09-13T00:20:36.629826489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:20:36.656941 systemd[1]: Started cri-containerd-724b30f9b7db7c80977043a2f3352d8275bef4253e4054dee946f3fb0987ceea.scope - libcontainer container 724b30f9b7db7c80977043a2f3352d8275bef4253e4054dee946f3fb0987ceea. Sep 13 00:20:36.667384 containerd[1470]: time="2025-09-13T00:20:36.662590263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:20:36.667384 containerd[1470]: time="2025-09-13T00:20:36.662703993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:20:36.667384 containerd[1470]: time="2025-09-13T00:20:36.662719123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:20:36.667384 containerd[1470]: time="2025-09-13T00:20:36.664685953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:20:36.686731 systemd[1]: Started cri-containerd-7243e58ab658baf925c498127c551884b47ceac5f64094454329d1174d191e27.scope - libcontainer container 7243e58ab658baf925c498127c551884b47ceac5f64094454329d1174d191e27. Sep 13 00:20:36.695440 systemd[1]: Started cri-containerd-29ad39f920261403a47034dd1e6b1bce4b3dd5d8fb4ffaea908841a9a3691c10.scope - libcontainer container 29ad39f920261403a47034dd1e6b1bce4b3dd5d8fb4ffaea908841a9a3691c10. 
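The three "Pulled image registry.k8s.io/pause:3.8" entries above report pull times of roughly 1.34s, 1.27s and 1.21s, one per sandbox. A throwaway helper (not part of containerd) for extracting the image reference and duration from such lines when profiling sandbox start-up:

```python
import re

# Matches containerd's 'Pulled image "<ref>" ... in <seconds>s' messages,
# tolerating the escaped quotes that appear in journal output.
PULLED = re.compile(r'Pulled image \\?"([^"\\]+)\\?".* in ([0-9.]+)s')

line = r'level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id ... in 1.338409437s"'
m = PULLED.search(line)
if m:
    image, secs = m.group(1), float(m.group(2))
    print(image, secs)  # registry.k8s.io/pause:3.8 1.338409437
```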
Sep 13 00:20:36.764459 containerd[1470]: time="2025-09-13T00:20:36.764332851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"724b30f9b7db7c80977043a2f3352d8275bef4253e4054dee946f3fb0987ceea\"" Sep 13 00:20:36.765583 kubelet[2147]: E0913 00:20:36.765554 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:36.772665 containerd[1470]: time="2025-09-13T00:20:36.772558208Z" level=info msg="CreateContainer within sandbox \"724b30f9b7db7c80977043a2f3352d8275bef4253e4054dee946f3fb0987ceea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:20:36.773045 containerd[1470]: time="2025-09-13T00:20:36.772981980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7ba34e7ccbb81fe2421877e3fcee28dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7243e58ab658baf925c498127c551884b47ceac5f64094454329d1174d191e27\"" Sep 13 00:20:36.773736 kubelet[2147]: E0913 00:20:36.773681 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:36.778362 containerd[1470]: time="2025-09-13T00:20:36.778319812Z" level=info msg="CreateContainer within sandbox \"7243e58ab658baf925c498127c551884b47ceac5f64094454329d1174d191e27\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:20:36.788298 containerd[1470]: time="2025-09-13T00:20:36.788248020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"29ad39f920261403a47034dd1e6b1bce4b3dd5d8fb4ffaea908841a9a3691c10\"" Sep 13 00:20:36.789319 kubelet[2147]: E0913 00:20:36.789290 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:36.795873 containerd[1470]: time="2025-09-13T00:20:36.795814180Z" level=info msg="CreateContainer within sandbox \"29ad39f920261403a47034dd1e6b1bce4b3dd5d8fb4ffaea908841a9a3691c10\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:20:36.798790 containerd[1470]: time="2025-09-13T00:20:36.798738105Z" level=info msg="CreateContainer within sandbox \"724b30f9b7db7c80977043a2f3352d8275bef4253e4054dee946f3fb0987ceea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4cdc83aba0e55e652cb05cc14e75b1d10828e7ade516a0ee071b939483b5ecd6\"" Sep 13 00:20:36.799372 containerd[1470]: time="2025-09-13T00:20:36.799339050Z" level=info msg="StartContainer for \"4cdc83aba0e55e652cb05cc14e75b1d10828e7ade516a0ee071b939483b5ecd6\"" Sep 13 00:20:36.814423 containerd[1470]: time="2025-09-13T00:20:36.814257746Z" level=info msg="CreateContainer within sandbox \"7243e58ab658baf925c498127c551884b47ceac5f64094454329d1174d191e27\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8e675d507d786d27e960d75afa1d0a14d532672aa0d4593e4d390fe42c6eb05e\"" Sep 13 00:20:36.815133 containerd[1470]: time="2025-09-13T00:20:36.815092443Z" level=info msg="StartContainer for \"8e675d507d786d27e960d75afa1d0a14d532672aa0d4593e4d390fe42c6eb05e\"" Sep 13 
00:20:36.820153 containerd[1470]: time="2025-09-13T00:20:36.820112530Z" level=info msg="CreateContainer within sandbox \"29ad39f920261403a47034dd1e6b1bce4b3dd5d8fb4ffaea908841a9a3691c10\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0400cc8c78e27da8112774560e81d42777ad261e9e7f2851dbf7bbe726087a1c\"" Sep 13 00:20:36.820722 containerd[1470]: time="2025-09-13T00:20:36.820672405Z" level=info msg="StartContainer for \"0400cc8c78e27da8112774560e81d42777ad261e9e7f2851dbf7bbe726087a1c\"" Sep 13 00:20:36.849917 systemd[1]: Started cri-containerd-4cdc83aba0e55e652cb05cc14e75b1d10828e7ade516a0ee071b939483b5ecd6.scope - libcontainer container 4cdc83aba0e55e652cb05cc14e75b1d10828e7ade516a0ee071b939483b5ecd6. Sep 13 00:20:36.865816 systemd[1]: Started cri-containerd-8e675d507d786d27e960d75afa1d0a14d532672aa0d4593e4d390fe42c6eb05e.scope - libcontainer container 8e675d507d786d27e960d75afa1d0a14d532672aa0d4593e4d390fe42c6eb05e. Sep 13 00:20:36.882594 systemd[1]: Started cri-containerd-0400cc8c78e27da8112774560e81d42777ad261e9e7f2851dbf7bbe726087a1c.scope - libcontainer container 0400cc8c78e27da8112774560e81d42777ad261e9e7f2851dbf7bbe726087a1c. Sep 13 00:20:36.920321 containerd[1470]: time="2025-09-13T00:20:36.919457873Z" level=info msg="StartContainer for \"4cdc83aba0e55e652cb05cc14e75b1d10828e7ade516a0ee071b939483b5ecd6\" returns successfully" Sep 13 00:20:36.925916 containerd[1470]: time="2025-09-13T00:20:36.925793079Z" level=info msg="StartContainer for \"8e675d507d786d27e960d75afa1d0a14d532672aa0d4593e4d390fe42c6eb05e\" returns successfully" Sep 13 00:20:36.947753 containerd[1470]: time="2025-09-13T00:20:36.947612206Z" level=info msg="StartContainer for \"0400cc8c78e27da8112774560e81d42777ad261e9e7f2851dbf7bbe726087a1c\" returns successfully" Sep 13 00:20:37.681568 kubelet[2147]: E0913 00:20:37.681348 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:20:37.681568 kubelet[2147]: E0913 00:20:37.681483 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:37.686111 kubelet[2147]: E0913 00:20:37.686094 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:20:37.686796 kubelet[2147]: E0913 00:20:37.686337 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:37.686796 kubelet[2147]: E0913 00:20:37.686659 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:20:37.686796 kubelet[2147]: E0913 00:20:37.686744 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:37.738187 kubelet[2147]: I0913 00:20:37.738151 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:20:38.628668 kubelet[2147]: E0913 00:20:38.628545 2147 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 00:20:38.662957 kubelet[2147]: I0913 00:20:38.662390 
2147 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 00:20:38.662957 kubelet[2147]: E0913 00:20:38.662443 2147 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 13 00:20:38.689376 kubelet[2147]: E0913 00:20:38.689316 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:38.690555 kubelet[2147]: E0913 00:20:38.690521 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:20:38.691003 kubelet[2147]: E0913 00:20:38.690706 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:38.691277 kubelet[2147]: E0913 00:20:38.691253 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:20:38.691550 kubelet[2147]: E0913 00:20:38.691524 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:38.790404 kubelet[2147]: E0913 00:20:38.790342 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:38.891116 kubelet[2147]: E0913 00:20:38.890964 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:38.991640 kubelet[2147]: E0913 00:20:38.991568 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:39.091927 kubelet[2147]: E0913 00:20:39.091854 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:39.192100 kubelet[2147]: E0913 00:20:39.191960 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:39.292696 kubelet[2147]: E0913 00:20:39.292598 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:39.393207 kubelet[2147]: E0913 00:20:39.393164 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:39.493806 kubelet[2147]: E0913 00:20:39.493741 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:39.594671 kubelet[2147]: E0913 00:20:39.594602 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:39.695370 kubelet[2147]: E0913 00:20:39.695313 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:39.796479 kubelet[2147]: E0913 00:20:39.796347 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:39.897018 kubelet[2147]: E0913 00:20:39.896957 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:20:40.032025 kubelet[2147]: I0913 00:20:40.031974 2147 kubelet.go:3309] "Creating a mirror pod for static 
pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:20:40.040700 kubelet[2147]: I0913 00:20:40.040665 2147 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:20:40.045034 kubelet[2147]: I0913 00:20:40.044996 2147 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:20:40.556789 systemd[1]: Reloading requested from client PID 2441 ('systemctl') (unit session-7.scope)... Sep 13 00:20:40.556819 systemd[1]: Reloading... Sep 13 00:20:40.624192 kubelet[2147]: I0913 00:20:40.623837 2147 apiserver.go:52] "Watching apiserver" Sep 13 00:20:40.627641 kubelet[2147]: E0913 00:20:40.627090 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:40.631962 kubelet[2147]: E0913 00:20:40.627961 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:40.632056 kubelet[2147]: E0913 00:20:40.628187 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:40.632194 kubelet[2147]: I0913 00:20:40.631765 2147 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:20:40.641712 zram_generator::config[2483]: No configuration found. Sep 13 00:20:40.765437 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:20:40.860812 systemd[1]: Reloading finished in 303 ms. Sep 13 00:20:40.913575 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:20:40.926140 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:20:40.926410 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:20:40.926462 systemd[1]: kubelet.service: Consumed 1.180s CPU time, 131.0M memory peak, 0B memory swap peak. Sep 13 00:20:40.944505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:20:41.137047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:20:41.142059 (kubelet)[2525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:20:41.193899 kubelet[2525]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:20:41.193899 kubelet[2525]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:20:41.193899 kubelet[2525]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:20:41.194332 kubelet[2525]: I0913 00:20:41.193943 2525 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:20:41.204132 kubelet[2525]: I0913 00:20:41.204068 2525 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:20:41.204132 kubelet[2525]: I0913 00:20:41.204117 2525 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:20:41.204405 kubelet[2525]: I0913 00:20:41.204382 2525 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:20:41.205862 kubelet[2525]: I0913 00:20:41.205842 2525 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 13 00:20:41.208138 kubelet[2525]: I0913 00:20:41.208096 2525 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:20:41.211530 kubelet[2525]: E0913 00:20:41.211474 2525 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:20:41.211530 kubelet[2525]: I0913 00:20:41.211522 2525 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:20:41.217903 kubelet[2525]: I0913 00:20:41.217876 2525 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:20:41.218217 kubelet[2525]: I0913 00:20:41.218176 2525 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:20:41.218379 kubelet[2525]: I0913 00:20:41.218208 2525 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:20:41.218480 kubelet[2525]: I0913 00:20:41.218383 2525 topology_manager.go:138] "Creating topology 
manager with none policy" Sep 13 00:20:41.218480 kubelet[2525]: I0913 00:20:41.218398 2525 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:20:41.219321 kubelet[2525]: I0913 00:20:41.219286 2525 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:20:41.220080 kubelet[2525]: I0913 00:20:41.219500 2525 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:20:41.220080 kubelet[2525]: I0913 00:20:41.219523 2525 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:20:41.220080 kubelet[2525]: I0913 00:20:41.219555 2525 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:20:41.220080 kubelet[2525]: I0913 00:20:41.219582 2525 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:20:41.223637 kubelet[2525]: I0913 00:20:41.222944 2525 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:20:41.225720 kubelet[2525]: I0913 00:20:41.224211 2525 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 00:20:41.233537 kubelet[2525]: I0913 00:20:41.233496 2525 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:20:41.233609 kubelet[2525]: I0913 00:20:41.233573 2525 server.go:1289] "Started kubelet" Sep 13 00:20:41.234929 kubelet[2525]: I0913 00:20:41.234879 2525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:20:41.235164 kubelet[2525]: I0913 00:20:41.235119 2525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:20:41.235205 kubelet[2525]: I0913 00:20:41.235187 2525 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:20:41.235258 kubelet[2525]: I0913 00:20:41.235229 2525 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:20:41.235897 kubelet[2525]: I0913 00:20:41.235821 2525 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:20:41.236486 kubelet[2525]: I0913 00:20:41.236262 2525 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:20:41.237944 kubelet[2525]: I0913 00:20:41.236601 2525 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:20:41.237944 kubelet[2525]: I0913 00:20:41.236674 2525 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:20:41.237944 kubelet[2525]: I0913 00:20:41.236760 2525 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:20:41.237944 kubelet[2525]: I0913 00:20:41.237885 2525 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:20:41.238509 kubelet[2525]: I0913 00:20:41.238481 2525 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:20:41.240575 kubelet[2525]: I0913 00:20:41.240539 2525 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:20:41.240757 kubelet[2525]: E0913 00:20:41.240722 2525 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:20:41.253497 kubelet[2525]: I0913 00:20:41.253446 2525 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:20:41.256272 kubelet[2525]: I0913 00:20:41.256123 2525 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:20:41.256272 kubelet[2525]: I0913 00:20:41.256169 2525 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:20:41.256272 kubelet[2525]: I0913 00:20:41.256199 2525 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 00:20:41.256272 kubelet[2525]: I0913 00:20:41.256258 2525 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:20:41.256420 kubelet[2525]: E0913 00:20:41.256342 2525 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:20:41.291264 kubelet[2525]: I0913 00:20:41.291225 2525 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:20:41.291264 kubelet[2525]: I0913 00:20:41.291252 2525 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:20:41.291420 kubelet[2525]: I0913 00:20:41.291284 2525 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:20:41.291498 kubelet[2525]: I0913 00:20:41.291478 2525 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:20:41.291534 kubelet[2525]: I0913 00:20:41.291500 2525 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:20:41.291534 kubelet[2525]: I0913 00:20:41.291526 2525 policy_none.go:49] "None policy: Start" Sep 13 00:20:41.291571 kubelet[2525]: I0913 00:20:41.291537 2525 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:20:41.291571 kubelet[2525]: I0913 00:20:41.291550 2525 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:20:41.291698 kubelet[2525]: I0913 00:20:41.291681 2525 state_mem.go:75] "Updated machine memory state" Sep 13 00:20:41.296691 kubelet[2525]: E0913 00:20:41.296136 2525 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:20:41.296691 kubelet[2525]: I0913 00:20:41.296481 2525 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:20:41.296691 kubelet[2525]: I0913 00:20:41.296515 2525 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:20:41.297146 kubelet[2525]: I0913 00:20:41.296972 2525 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:20:41.299349 kubelet[2525]: E0913 00:20:41.299299 2525 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 00:20:41.357607 kubelet[2525]: I0913 00:20:41.357553 2525 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:20:41.357834 kubelet[2525]: I0913 00:20:41.357767 2525 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:20:41.358081 kubelet[2525]: I0913 00:20:41.357874 2525 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:20:41.364161 kubelet[2525]: E0913 00:20:41.364027 2525 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:20:41.364161 kubelet[2525]: E0913 00:20:41.364094 2525 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 00:20:41.364333 kubelet[2525]: E0913 00:20:41.364297 2525 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:20:41.404895 kubelet[2525]: I0913 00:20:41.404728 2525 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:20:41.412159 kubelet[2525]: I0913 00:20:41.412121 2525 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 13 00:20:41.412243 kubelet[2525]: I0913 00:20:41.412228 2525 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 00:20:41.437786 kubelet[2525]: I0913 00:20:41.437738 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ba34e7ccbb81fe2421877e3fcee28dd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7ba34e7ccbb81fe2421877e3fcee28dd\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:20:41.437786 kubelet[2525]: I0913 00:20:41.437784 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:20:41.438056 kubelet[2525]: I0913 00:20:41.437804 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:20:41.438056 kubelet[2525]: I0913 00:20:41.437833 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:20:41.438056 kubelet[2525]: I0913 00:20:41.437852 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod 
\"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:20:41.438056 kubelet[2525]: I0913 00:20:41.437866 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ba34e7ccbb81fe2421877e3fcee28dd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7ba34e7ccbb81fe2421877e3fcee28dd\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:20:41.438056 kubelet[2525]: I0913 00:20:41.437884 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:20:41.438187 kubelet[2525]: I0913 00:20:41.437975 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:20:41.438187 kubelet[2525]: I0913 00:20:41.438013 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ba34e7ccbb81fe2421877e3fcee28dd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7ba34e7ccbb81fe2421877e3fcee28dd\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:20:41.664922 kubelet[2525]: E0913 00:20:41.664591 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:41.664922 kubelet[2525]: E0913 00:20:41.664678 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:41.664922 kubelet[2525]: E0913 00:20:41.664781 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:42.220965 kubelet[2525]: I0913 00:20:42.220900 2525 apiserver.go:52] "Watching apiserver" Sep 13 00:20:42.237431 kubelet[2525]: I0913 00:20:42.237392 2525 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:20:42.274488 kubelet[2525]: E0913 00:20:42.274429 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:42.276083 kubelet[2525]: I0913 00:20:42.275114 2525 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:20:42.276083 kubelet[2525]: E0913 00:20:42.275510 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:42.340873 kubelet[2525]: E0913 00:20:42.340786 2525 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:20:42.341038 kubelet[2525]: E0913 
00:20:42.341009 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:42.372636 kubelet[2525]: I0913 00:20:42.372560 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.372537372 podStartE2EDuration="2.372537372s" podCreationTimestamp="2025-09-13 00:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:20:42.36437559 +0000 UTC m=+1.217817271" watchObservedRunningTime="2025-09-13 00:20:42.372537372 +0000 UTC m=+1.225979053" Sep 13 00:20:42.380773 kubelet[2525]: I0913 00:20:42.380528 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.380516513 podStartE2EDuration="2.380516513s" podCreationTimestamp="2025-09-13 00:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:20:42.372745201 +0000 UTC m=+1.226186883" watchObservedRunningTime="2025-09-13 00:20:42.380516513 +0000 UTC m=+1.233958194" Sep 13 00:20:42.401955 kubelet[2525]: I0913 00:20:42.401862 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.401838844 podStartE2EDuration="2.401838844s" podCreationTimestamp="2025-09-13 00:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:20:42.380668695 +0000 UTC m=+1.234110436" watchObservedRunningTime="2025-09-13 00:20:42.401838844 +0000 UTC m=+1.255280535" Sep 13 00:20:43.276410 kubelet[2525]: E0913 00:20:43.276365 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:43.276947 kubelet[2525]: E0913 00:20:43.276701 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:44.286780 kubelet[2525]: E0913 00:20:44.286739 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:45.203689 kubelet[2525]: E0913 00:20:45.203647 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:46.564464 kubelet[2525]: I0913 00:20:46.564427 2525 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:20:46.564971 containerd[1470]: time="2025-09-13T00:20:46.564825766Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:20:46.565251 kubelet[2525]: I0913 00:20:46.565041 2525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:20:46.843739 systemd[1]: Created slice kubepods-besteffort-pod93d38106_bf2b_4015_ba11_1187a32bf980.slice - libcontainer container kubepods-besteffort-pod93d38106_bf2b_4015_ba11_1187a32bf980.slice. 
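The recurring dns.go:153 "Nameserver limits exceeded" warnings above come from the kubelet clamping the node's resolv.conf to the libc resolver limit of three nameservers; the "applied nameserver line" it reports (1.1.1.1 1.0.0.1 8.8.8.8) is whatever survives the cut. A minimal sketch of that trimming, assuming a plain resolv.conf as input (illustrative only, not the kubelet's actual dns package code):

    # Sketch of the nameserver clamp behind the dns.go:153 warnings above;
    # this only mirrors the observable effect, not the real kubelet logic.
    MAX_NAMESERVERS = 3  # libc resolver limit the kubelet enforces

    def applied_nameservers(resolv_conf):
        servers = []
        for line in resolv_conf.splitlines():
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
        return servers[:MAX_NAMESERVERS]

    conf = ("nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
            "nameserver 8.8.8.8\nnameserver 9.9.9.9\n")
    print(applied_nameservers(conf))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8'], the applied line logged above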
Sep 13 00:20:46.868993 kubelet[2525]: I0913 00:20:46.868943 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/93d38106-bf2b-4015-ba11-1187a32bf980-kube-proxy\") pod \"kube-proxy-5xlcg\" (UID: \"93d38106-bf2b-4015-ba11-1187a32bf980\") " pod="kube-system/kube-proxy-5xlcg" Sep 13 00:20:46.868993 kubelet[2525]: I0913 00:20:46.868981 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93d38106-bf2b-4015-ba11-1187a32bf980-xtables-lock\") pod \"kube-proxy-5xlcg\" (UID: \"93d38106-bf2b-4015-ba11-1187a32bf980\") " pod="kube-system/kube-proxy-5xlcg" Sep 13 00:20:46.869165 kubelet[2525]: I0913 00:20:46.869004 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqcdl\" (UniqueName: \"kubernetes.io/projected/93d38106-bf2b-4015-ba11-1187a32bf980-kube-api-access-kqcdl\") pod \"kube-proxy-5xlcg\" (UID: \"93d38106-bf2b-4015-ba11-1187a32bf980\") " pod="kube-system/kube-proxy-5xlcg" Sep 13 00:20:46.869165 kubelet[2525]: I0913 00:20:46.869049 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93d38106-bf2b-4015-ba11-1187a32bf980-lib-modules\") pod \"kube-proxy-5xlcg\" (UID: \"93d38106-bf2b-4015-ba11-1187a32bf980\") " pod="kube-system/kube-proxy-5xlcg" Sep 13 00:20:46.975261 kubelet[2525]: E0913 00:20:46.975216 2525 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 13 00:20:46.975261 kubelet[2525]: E0913 00:20:46.975244 2525 projected.go:194] Error preparing data for projected volume kube-api-access-kqcdl for pod kube-system/kube-proxy-5xlcg: configmap "kube-root-ca.crt" not found Sep 13 00:20:46.975421 kubelet[2525]: E0913 00:20:46.975306 2525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/93d38106-bf2b-4015-ba11-1187a32bf980-kube-api-access-kqcdl podName:93d38106-bf2b-4015-ba11-1187a32bf980 nodeName:}" failed. No retries permitted until 2025-09-13 00:20:47.475285611 +0000 UTC m=+6.328727292 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kqcdl" (UniqueName: "kubernetes.io/projected/93d38106-bf2b-4015-ba11-1187a32bf980-kube-api-access-kqcdl") pod "kube-proxy-5xlcg" (UID: "93d38106-bf2b-4015-ba11-1187a32bf980") : configmap "kube-root-ca.crt" not found Sep 13 00:20:47.544601 kubelet[2525]: E0913 00:20:47.544557 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:47.574993 kubelet[2525]: E0913 00:20:47.574911 2525 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 13 00:20:47.574993 kubelet[2525]: E0913 00:20:47.574965 2525 projected.go:194] Error preparing data for projected volume kube-api-access-kqcdl for pod kube-system/kube-proxy-5xlcg: configmap "kube-root-ca.crt" not found Sep 13 00:20:47.575572 kubelet[2525]: E0913 00:20:47.575037 2525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/93d38106-bf2b-4015-ba11-1187a32bf980-kube-api-access-kqcdl podName:93d38106-bf2b-4015-ba11-1187a32bf980 nodeName:}" failed. 
No retries permitted until 2025-09-13 00:20:48.57501618 +0000 UTC m=+7.428457861 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kqcdl" (UniqueName: "kubernetes.io/projected/93d38106-bf2b-4015-ba11-1187a32bf980-kube-api-access-kqcdl") pod "kube-proxy-5xlcg" (UID: "93d38106-bf2b-4015-ba11-1187a32bf980") : configmap "kube-root-ca.crt" not found Sep 13 00:20:47.764256 update_engine[1453]: I20250913 00:20:47.764169 1453 update_attempter.cc:509] Updating boot flags... Sep 13 00:20:48.061677 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2584) Sep 13 00:20:48.110879 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2583) Sep 13 00:20:48.149665 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2583) Sep 13 00:20:48.283788 kubelet[2525]: E0913 00:20:48.283742 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:48.654842 kubelet[2525]: E0913 00:20:48.654787 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:48.655587 containerd[1470]: time="2025-09-13T00:20:48.655537744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xlcg,Uid:93d38106-bf2b-4015-ba11-1187a32bf980,Namespace:kube-system,Attempt:0,}" Sep 13 00:20:49.144672 systemd[1]: Created slice kubepods-besteffort-pod2618c392_d926_4b98_b396_3c0d1840992c.slice - libcontainer container kubepods-besteffort-pod2618c392_d926_4b98_b396_3c0d1840992c.slice. Sep 13 00:20:49.185856 kubelet[2525]: I0913 00:20:49.185791 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2618c392-d926-4b98-b396-3c0d1840992c-var-lib-calico\") pod \"tigera-operator-755d956888-wznmv\" (UID: \"2618c392-d926-4b98-b396-3c0d1840992c\") " pod="tigera-operator/tigera-operator-755d956888-wznmv" Sep 13 00:20:49.185856 kubelet[2525]: I0913 00:20:49.185846 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9kmq\" (UniqueName: \"kubernetes.io/projected/2618c392-d926-4b98-b396-3c0d1840992c-kube-api-access-x9kmq\") pod \"tigera-operator-755d956888-wznmv\" (UID: \"2618c392-d926-4b98-b396-3c0d1840992c\") " pod="tigera-operator/tigera-operator-755d956888-wznmv" Sep 13 00:20:49.212929 containerd[1470]: time="2025-09-13T00:20:49.212046432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:20:49.212929 containerd[1470]: time="2025-09-13T00:20:49.212843963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:20:49.212929 containerd[1470]: time="2025-09-13T00:20:49.212862367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:20:49.213506 containerd[1470]: time="2025-09-13T00:20:49.213006012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:20:49.240835 systemd[1]: Started cri-containerd-ea373dc6b0bd1985ab233d4926564e1065ad15a57e6ecbe5d722ae4b5cff7698.scope - libcontainer container ea373dc6b0bd1985ab233d4926564e1065ad15a57e6ecbe5d722ae4b5cff7698. Sep 13 00:20:49.265736 containerd[1470]: time="2025-09-13T00:20:49.265674923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xlcg,Uid:93d38106-bf2b-4015-ba11-1187a32bf980,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea373dc6b0bd1985ab233d4926564e1065ad15a57e6ecbe5d722ae4b5cff7698\"" Sep 13 00:20:49.266686 kubelet[2525]: E0913 00:20:49.266466 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:49.441191 kubelet[2525]: E0913 00:20:49.441062 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:49.441325 containerd[1470]: time="2025-09-13T00:20:49.441213016Z" level=info msg="CreateContainer within sandbox \"ea373dc6b0bd1985ab233d4926564e1065ad15a57e6ecbe5d722ae4b5cff7698\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:20:49.447764 containerd[1470]: time="2025-09-13T00:20:49.447712047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-wznmv,Uid:2618c392-d926-4b98-b396-3c0d1840992c,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:20:49.686370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1612446194.mount: Deactivated successfully. Sep 13 00:20:49.697764 containerd[1470]: time="2025-09-13T00:20:49.697600671Z" level=info msg="CreateContainer within sandbox \"ea373dc6b0bd1985ab233d4926564e1065ad15a57e6ecbe5d722ae4b5cff7698\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eff0694f02a920cd3851b37f88199dfccbe073b76cab270ed2ca34d94e52b0af\"" Sep 13 00:20:49.698810 containerd[1470]: time="2025-09-13T00:20:49.698769790Z" level=info msg="StartContainer for \"eff0694f02a920cd3851b37f88199dfccbe073b76cab270ed2ca34d94e52b0af\"" Sep 13 00:20:49.712782 containerd[1470]: time="2025-09-13T00:20:49.712593747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:20:49.713549 containerd[1470]: time="2025-09-13T00:20:49.713452034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:20:49.713549 containerd[1470]: time="2025-09-13T00:20:49.713486890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:20:49.713799 containerd[1470]: time="2025-09-13T00:20:49.713751666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:20:49.740942 systemd[1]: Started cri-containerd-eff0694f02a920cd3851b37f88199dfccbe073b76cab270ed2ca34d94e52b0af.scope - libcontainer container eff0694f02a920cd3851b37f88199dfccbe073b76cab270ed2ca34d94e52b0af. Sep 13 00:20:49.746488 systemd[1]: Started cri-containerd-f4576c6afd59ce26f3aced6c475e921c682be6831c9b20760a5c81d0cd60dd94.scope - libcontainer container f4576c6afd59ce26f3aced6c475e921c682be6831c9b20760a5c81d0cd60dd94. 
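The two MountVolume.SetUp failures earlier (durationBeforeRetry 500ms, then 1s) show the kubelet's per-volume exponential backoff while the kube-root-ca.crt configmap has not been published yet. A sketch of that doubling schedule follows; the initial delay and factor match the logged progression, while the cap is an assumption for illustration (the real backoff lives in the kubelet's nested pending operations code):

    # Doubling retry schedule matching the 500ms -> 1s progression logged above;
    # the cap value here is an assumed placeholder, not the kubelet's constant.
    def backoff_delays(initial=0.5, factor=2.0, cap=130.0):
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= factor

    g = backoff_delays()
    print([next(g) for _ in range(5)])  # [0.5, 1.0, 2.0, 4.0, 8.0]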
Sep 13 00:20:49.777006 containerd[1470]: time="2025-09-13T00:20:49.776962096Z" level=info msg="StartContainer for \"eff0694f02a920cd3851b37f88199dfccbe073b76cab270ed2ca34d94e52b0af\" returns successfully" Sep 13 00:20:49.797475 containerd[1470]: time="2025-09-13T00:20:49.797431985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-wznmv,Uid:2618c392-d926-4b98-b396-3c0d1840992c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f4576c6afd59ce26f3aced6c475e921c682be6831c9b20760a5c81d0cd60dd94\"" Sep 13 00:20:49.800525 containerd[1470]: time="2025-09-13T00:20:49.799697456Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:20:50.290482 kubelet[2525]: E0913 00:20:50.290295 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:50.299810 kubelet[2525]: I0913 00:20:50.299732 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5xlcg" podStartSLOduration=4.299710753 podStartE2EDuration="4.299710753s" podCreationTimestamp="2025-09-13 00:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:20:50.299419629 +0000 UTC m=+9.152861310" watchObservedRunningTime="2025-09-13 00:20:50.299710753 +0000 UTC m=+9.153152434" Sep 13 00:20:51.287573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3132504772.mount: Deactivated successfully. Sep 13 00:20:51.294183 kubelet[2525]: E0913 00:20:51.294153 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:51.867526 containerd[1470]: time="2025-09-13T00:20:51.867436578Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:51.868260 containerd[1470]: time="2025-09-13T00:20:51.868216562Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 00:20:51.871428 containerd[1470]: time="2025-09-13T00:20:51.870311862Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:51.873130 containerd[1470]: time="2025-09-13T00:20:51.873086534Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:20:51.874021 containerd[1470]: time="2025-09-13T00:20:51.873974063Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.074222495s" Sep 13 00:20:51.874021 containerd[1470]: time="2025-09-13T00:20:51.874014210Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:20:51.879086 containerd[1470]: time="2025-09-13T00:20:51.879031041Z" level=info msg="CreateContainer 
within sandbox \"f4576c6afd59ce26f3aced6c475e921c682be6831c9b20760a5c81d0cd60dd94\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:20:51.894403 containerd[1470]: time="2025-09-13T00:20:51.894332271Z" level=info msg="CreateContainer within sandbox \"f4576c6afd59ce26f3aced6c475e921c682be6831c9b20760a5c81d0cd60dd94\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1be3db30cc37f88c26519626c39e30c067cdbf6e06343e2a1274e681df56b85c\"" Sep 13 00:20:51.895160 containerd[1470]: time="2025-09-13T00:20:51.895118839Z" level=info msg="StartContainer for \"1be3db30cc37f88c26519626c39e30c067cdbf6e06343e2a1274e681df56b85c\"" Sep 13 00:20:51.931826 systemd[1]: Started cri-containerd-1be3db30cc37f88c26519626c39e30c067cdbf6e06343e2a1274e681df56b85c.scope - libcontainer container 1be3db30cc37f88c26519626c39e30c067cdbf6e06343e2a1274e681df56b85c. Sep 13 00:20:52.191325 containerd[1470]: time="2025-09-13T00:20:52.191083213Z" level=info msg="StartContainer for \"1be3db30cc37f88c26519626c39e30c067cdbf6e06343e2a1274e681df56b85c\" returns successfully" Sep 13 00:20:54.292955 kubelet[2525]: E0913 00:20:54.292895 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:54.315540 kubelet[2525]: I0913 00:20:54.315456 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-wznmv" podStartSLOduration=4.239710935 podStartE2EDuration="6.31543533s" podCreationTimestamp="2025-09-13 00:20:48 +0000 UTC" firstStartedPulling="2025-09-13 00:20:49.799134883 +0000 UTC m=+8.652576564" lastFinishedPulling="2025-09-13 00:20:51.874859278 +0000 UTC m=+10.728300959" observedRunningTime="2025-09-13 00:20:52.515426172 +0000 UTC m=+11.368867863" watchObservedRunningTime="2025-09-13 00:20:54.31543533 +0000 UTC m=+13.168877011" Sep 13 00:20:55.220656 kubelet[2525]: E0913 00:20:55.218333 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:20:57.334661 sudo[1645]: pam_unix(sudo:session): session closed for user root Sep 13 00:20:57.337189 sshd[1642]: pam_unix(sshd:session): session closed for user core Sep 13 00:20:57.346212 systemd[1]: sshd@6-10.0.0.7:22-10.0.0.1:55642.service: Deactivated successfully. Sep 13 00:20:57.349154 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:20:57.351321 systemd[1]: session-7.scope: Consumed 5.376s CPU time, 159.9M memory peak, 0B memory swap peak. Sep 13 00:20:57.354570 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:20:57.356210 systemd-logind[1449]: Removed session 7. Sep 13 00:21:01.498232 systemd[1]: Created slice kubepods-besteffort-podde558ef9_7fd6_4be1_9192_d790b61757da.slice - libcontainer container kubepods-besteffort-podde558ef9_7fd6_4be1_9192_d790b61757da.slice. 
Sep 13 00:21:01.566428 kubelet[2525]: I0913 00:21:01.566315 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de558ef9-7fd6-4be1-9192-d790b61757da-tigera-ca-bundle\") pod \"calico-typha-7dbbf97c7f-fs8gj\" (UID: \"de558ef9-7fd6-4be1-9192-d790b61757da\") " pod="calico-system/calico-typha-7dbbf97c7f-fs8gj" Sep 13 00:21:01.566428 kubelet[2525]: I0913 00:21:01.566401 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tvlg\" (UniqueName: \"kubernetes.io/projected/de558ef9-7fd6-4be1-9192-d790b61757da-kube-api-access-7tvlg\") pod \"calico-typha-7dbbf97c7f-fs8gj\" (UID: \"de558ef9-7fd6-4be1-9192-d790b61757da\") " pod="calico-system/calico-typha-7dbbf97c7f-fs8gj" Sep 13 00:21:01.566428 kubelet[2525]: I0913 00:21:01.566423 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/de558ef9-7fd6-4be1-9192-d790b61757da-typha-certs\") pod \"calico-typha-7dbbf97c7f-fs8gj\" (UID: \"de558ef9-7fd6-4be1-9192-d790b61757da\") " pod="calico-system/calico-typha-7dbbf97c7f-fs8gj" Sep 13 00:21:01.804729 kubelet[2525]: E0913 00:21:01.803982 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:21:01.804934 containerd[1470]: time="2025-09-13T00:21:01.804814969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7dbbf97c7f-fs8gj,Uid:de558ef9-7fd6-4be1-9192-d790b61757da,Namespace:calico-system,Attempt:0,}" Sep 13 00:21:01.831464 containerd[1470]: time="2025-09-13T00:21:01.831296715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:21:01.831464 containerd[1470]: time="2025-09-13T00:21:01.831421110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:21:01.831464 containerd[1470]: time="2025-09-13T00:21:01.831437972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:21:01.832486 containerd[1470]: time="2025-09-13T00:21:01.832428658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:21:01.849271 systemd[1]: Created slice kubepods-besteffort-pode9d3c3ab_df17_45d8_a5a7_d55467fe3011.slice - libcontainer container kubepods-besteffort-pode9d3c3ab_df17_45d8_a5a7_d55467fe3011.slice. Sep 13 00:21:01.865808 systemd[1]: Started cri-containerd-973e94936fba45c7b770bb63efb7439910d4f1993d9d56abb2468e3c214cebd4.scope - libcontainer container 973e94936fba45c7b770bb63efb7439910d4f1993d9d56abb2468e3c214cebd4. 
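The bursts of driver-call.go and plugins.go errors below are the kubelet probing its FlexVolume plugin directory: the nodeagent~uds driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds has not been installed yet (calico-node mounts flexvol-driver-host for exactly that), so every init call returns empty output that fails JSON unmarshalling. For reference, a conforming driver answers init with a small status document, roughly as below; this is a sketch of the documented FlexVolume handshake, not Calico's actual driver:

    import json
    import sys

    # Hypothetical stand-in for a FlexVolume driver's `init` verb; a real driver
    # must print a JSON status like this so the kubelet's driver-call code can
    # unmarshal it instead of failing on empty output as in the entries below.
    def handle_init():
        return {"status": "Success", "capabilities": {"attach": False}}

    if __name__ == "__main__":
        if len(sys.argv) > 1 and sys.argv[1] == "init":
            print(json.dumps(handle_init()))
        else:
            print(json.dumps({"status": "Not supported"}))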
Sep 13 00:21:01.868550 kubelet[2525]: I0913 00:21:01.868396 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9d3c3ab-df17-45d8-a5a7-d55467fe3011-lib-modules\") pod \"calico-node-w28tn\" (UID: \"e9d3c3ab-df17-45d8-a5a7-d55467fe3011\") " pod="calico-system/calico-node-w28tn" Sep 13 00:21:01.868550 kubelet[2525]: I0913 00:21:01.868430 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9d3c3ab-df17-45d8-a5a7-d55467fe3011-xtables-lock\") pod \"calico-node-w28tn\" (UID: \"e9d3c3ab-df17-45d8-a5a7-d55467fe3011\") " pod="calico-system/calico-node-w28tn" Sep 13 00:21:01.868550 kubelet[2525]: I0913 00:21:01.868447 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e9d3c3ab-df17-45d8-a5a7-d55467fe3011-var-lib-calico\") pod \"calico-node-w28tn\" (UID: \"e9d3c3ab-df17-45d8-a5a7-d55467fe3011\") " pod="calico-system/calico-node-w28tn" Sep 13 00:21:01.868550 kubelet[2525]: I0913 00:21:01.868461 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e9d3c3ab-df17-45d8-a5a7-d55467fe3011-cni-log-dir\") pod \"calico-node-w28tn\" (UID: \"e9d3c3ab-df17-45d8-a5a7-d55467fe3011\") " pod="calico-system/calico-node-w28tn" Sep 13 00:21:01.868550 kubelet[2525]: I0913 00:21:01.868476 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e9d3c3ab-df17-45d8-a5a7-d55467fe3011-node-certs\") pod \"calico-node-w28tn\" (UID: \"e9d3c3ab-df17-45d8-a5a7-d55467fe3011\") " pod="calico-system/calico-node-w28tn" Sep 13 00:21:01.868894 kubelet[2525]: I0913 00:21:01.868490 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e9d3c3ab-df17-45d8-a5a7-d55467fe3011-policysync\") pod \"calico-node-w28tn\" (UID: \"e9d3c3ab-df17-45d8-a5a7-d55467fe3011\") " pod="calico-system/calico-node-w28tn" Sep 13 00:21:01.868894 kubelet[2525]: I0913 00:21:01.868506 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9d3c3ab-df17-45d8-a5a7-d55467fe3011-tigera-ca-bundle\") pod \"calico-node-w28tn\" (UID: \"e9d3c3ab-df17-45d8-a5a7-d55467fe3011\") " pod="calico-system/calico-node-w28tn" Sep 13 00:21:01.868894 kubelet[2525]: I0913 00:21:01.868531 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e9d3c3ab-df17-45d8-a5a7-d55467fe3011-flexvol-driver-host\") pod \"calico-node-w28tn\" (UID: \"e9d3c3ab-df17-45d8-a5a7-d55467fe3011\") " pod="calico-system/calico-node-w28tn" Sep 13 00:21:01.868894 kubelet[2525]: I0913 00:21:01.868551 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf9fm\" (UniqueName: \"kubernetes.io/projected/e9d3c3ab-df17-45d8-a5a7-d55467fe3011-kube-api-access-mf9fm\") pod \"calico-node-w28tn\" (UID: \"e9d3c3ab-df17-45d8-a5a7-d55467fe3011\") " pod="calico-system/calico-node-w28tn" Sep 13 00:21:01.868894 kubelet[2525]: I0913 00:21:01.868569 2525 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e9d3c3ab-df17-45d8-a5a7-d55467fe3011-var-run-calico\") pod \"calico-node-w28tn\" (UID: \"e9d3c3ab-df17-45d8-a5a7-d55467fe3011\") " pod="calico-system/calico-node-w28tn" Sep 13 00:21:01.869007 kubelet[2525]: I0913 00:21:01.868584 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e9d3c3ab-df17-45d8-a5a7-d55467fe3011-cni-bin-dir\") pod \"calico-node-w28tn\" (UID: \"e9d3c3ab-df17-45d8-a5a7-d55467fe3011\") " pod="calico-system/calico-node-w28tn" Sep 13 00:21:01.869007 kubelet[2525]: I0913 00:21:01.868597 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e9d3c3ab-df17-45d8-a5a7-d55467fe3011-cni-net-dir\") pod \"calico-node-w28tn\" (UID: \"e9d3c3ab-df17-45d8-a5a7-d55467fe3011\") " pod="calico-system/calico-node-w28tn" Sep 13 00:21:01.905336 containerd[1470]: time="2025-09-13T00:21:01.905095297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7dbbf97c7f-fs8gj,Uid:de558ef9-7fd6-4be1-9192-d790b61757da,Namespace:calico-system,Attempt:0,} returns sandbox id \"973e94936fba45c7b770bb63efb7439910d4f1993d9d56abb2468e3c214cebd4\"" Sep 13 00:21:01.907185 kubelet[2525]: E0913 00:21:01.907158 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:21:01.908945 containerd[1470]: time="2025-09-13T00:21:01.908889373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:21:01.973475 kubelet[2525]: E0913 00:21:01.973379 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:01.973475 kubelet[2525]: W0913 00:21:01.973405 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:01.973475 kubelet[2525]: E0913 00:21:01.973443 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:01.977795 kubelet[2525]: E0913 00:21:01.975224 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:01.977795 kubelet[2525]: W0913 00:21:01.975244 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:01.977795 kubelet[2525]: E0913 00:21:01.975258 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:21:01.982536 kubelet[2525]: E0913 00:21:01.982450 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:01.982536 kubelet[2525]: W0913 00:21:01.982512 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:01.982536 kubelet[2525]: E0913 00:21:01.982532 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:02.159119 containerd[1470]: time="2025-09-13T00:21:02.158975580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w28tn,Uid:e9d3c3ab-df17-45d8-a5a7-d55467fe3011,Namespace:calico-system,Attempt:0,}" Sep 13 00:21:02.910206 kubelet[2525]: E0913 00:21:02.910140 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vkpzj" podUID="fe5fc44c-61ad-4dd3-a615-acf708addb61" Sep 13 00:21:02.973053 kubelet[2525]: E0913 00:21:02.973013 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.973053 kubelet[2525]: W0913 00:21:02.973041 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.973222 kubelet[2525]: E0913 00:21:02.973068 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:02.973306 kubelet[2525]: E0913 00:21:02.973294 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.973306 kubelet[2525]: W0913 00:21:02.973304 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.973373 kubelet[2525]: E0913 00:21:02.973313 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:02.973518 kubelet[2525]: E0913 00:21:02.973497 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.973518 kubelet[2525]: W0913 00:21:02.973506 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.973518 kubelet[2525]: E0913 00:21:02.973515 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:21:02.973775 kubelet[2525]: E0913 00:21:02.973763 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.973775 kubelet[2525]: W0913 00:21:02.973772 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.973838 kubelet[2525]: E0913 00:21:02.973781 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:02.973985 kubelet[2525]: E0913 00:21:02.973974 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.973985 kubelet[2525]: W0913 00:21:02.973983 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.974029 kubelet[2525]: E0913 00:21:02.973990 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:02.974183 kubelet[2525]: E0913 00:21:02.974171 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.974183 kubelet[2525]: W0913 00:21:02.974180 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.974244 kubelet[2525]: E0913 00:21:02.974187 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:02.974368 kubelet[2525]: E0913 00:21:02.974357 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.974368 kubelet[2525]: W0913 00:21:02.974366 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.974411 kubelet[2525]: E0913 00:21:02.974373 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:02.974678 kubelet[2525]: E0913 00:21:02.974664 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.974678 kubelet[2525]: W0913 00:21:02.974675 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.974749 kubelet[2525]: E0913 00:21:02.974685 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:21:02.974902 kubelet[2525]: E0913 00:21:02.974890 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.974902 kubelet[2525]: W0913 00:21:02.974900 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.974946 kubelet[2525]: E0913 00:21:02.974908 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:02.975086 kubelet[2525]: E0913 00:21:02.975075 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.975086 kubelet[2525]: W0913 00:21:02.975083 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.975135 kubelet[2525]: E0913 00:21:02.975094 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:02.975299 kubelet[2525]: E0913 00:21:02.975288 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.975299 kubelet[2525]: W0913 00:21:02.975297 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.975352 kubelet[2525]: E0913 00:21:02.975305 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:02.975504 kubelet[2525]: E0913 00:21:02.975494 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.975504 kubelet[2525]: W0913 00:21:02.975502 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.975549 kubelet[2525]: E0913 00:21:02.975509 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:02.975732 kubelet[2525]: E0913 00:21:02.975720 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.975732 kubelet[2525]: W0913 00:21:02.975729 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.975786 kubelet[2525]: E0913 00:21:02.975737 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:21:02.975930 kubelet[2525]: E0913 00:21:02.975918 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.975930 kubelet[2525]: W0913 00:21:02.975927 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.975983 kubelet[2525]: E0913 00:21:02.975934 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:02.976106 kubelet[2525]: E0913 00:21:02.976095 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.976106 kubelet[2525]: W0913 00:21:02.976104 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.976153 kubelet[2525]: E0913 00:21:02.976112 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:02.976330 kubelet[2525]: E0913 00:21:02.976318 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.976330 kubelet[2525]: W0913 00:21:02.976327 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.976389 kubelet[2525]: E0913 00:21:02.976336 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:02.976533 kubelet[2525]: E0913 00:21:02.976522 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.976533 kubelet[2525]: W0913 00:21:02.976531 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.976577 kubelet[2525]: E0913 00:21:02.976540 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:02.976742 kubelet[2525]: E0913 00:21:02.976728 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.976742 kubelet[2525]: W0913 00:21:02.976739 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.976799 kubelet[2525]: E0913 00:21:02.976747 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:21:02.976961 kubelet[2525]: E0913 00:21:02.976947 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.976988 kubelet[2525]: W0913 00:21:02.976960 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.976988 kubelet[2525]: E0913 00:21:02.976971 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:02.977189 kubelet[2525]: E0913 00:21:02.977178 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.977189 kubelet[2525]: W0913 00:21:02.977187 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.977234 kubelet[2525]: E0913 00:21:02.977196 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:02.977526 kubelet[2525]: E0913 00:21:02.977506 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.977526 kubelet[2525]: W0913 00:21:02.977517 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.977572 kubelet[2525]: E0913 00:21:02.977525 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:02.977572 kubelet[2525]: I0913 00:21:02.977549 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe5fc44c-61ad-4dd3-a615-acf708addb61-socket-dir\") pod \"csi-node-driver-vkpzj\" (UID: \"fe5fc44c-61ad-4dd3-a615-acf708addb61\") " pod="calico-system/csi-node-driver-vkpzj" Sep 13 00:21:02.977795 kubelet[2525]: E0913 00:21:02.977774 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.977795 kubelet[2525]: W0913 00:21:02.977784 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.977795 kubelet[2525]: E0913 00:21:02.977794 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:21:02.977885 kubelet[2525]: I0913 00:21:02.977820 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbq9h\" (UniqueName: \"kubernetes.io/projected/fe5fc44c-61ad-4dd3-a615-acf708addb61-kube-api-access-dbq9h\") pod \"csi-node-driver-vkpzj\" (UID: \"fe5fc44c-61ad-4dd3-a615-acf708addb61\") " pod="calico-system/csi-node-driver-vkpzj"
Sep 13 00:21:02.978174 kubelet[2525]: E0913 00:21:02.978140 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:02.978174 kubelet[2525]: W0913 00:21:02.978165 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:02.978224 kubelet[2525]: E0913 00:21:02.978184 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same three-record FlexVolume failure (driver-call.go:262, driver-call.go:149, plugins.go:703) repeats continuously from 00:21:02.978 through 00:21:03.297 while the remaining volumes are reconciled; verbatim duplicates omitted, unique records kept below ...]
Sep 13 00:21:02.978755 kubelet[2525]: I0913 00:21:02.978692 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe5fc44c-61ad-4dd3-a615-acf708addb61-registration-dir\") pod \"csi-node-driver-vkpzj\" (UID: \"fe5fc44c-61ad-4dd3-a615-acf708addb61\") " pod="calico-system/csi-node-driver-vkpzj"
Sep 13 00:21:02.979341 kubelet[2525]: I0913 00:21:02.979319 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe5fc44c-61ad-4dd3-a615-acf708addb61-kubelet-dir\") pod \"csi-node-driver-vkpzj\" (UID: \"fe5fc44c-61ad-4dd3-a615-acf708addb61\") " pod="calico-system/csi-node-driver-vkpzj"
Sep 13 00:21:02.980024 kubelet[2525]: I0913 00:21:02.980007 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fe5fc44c-61ad-4dd3-a615-acf708addb61-varrun\") pod \"csi-node-driver-vkpzj\" (UID: \"fe5fc44c-61ad-4dd3-a615-acf708addb61\") " pod="calico-system/csi-node-driver-vkpzj"
Sep 13 00:21:03.211904 containerd[1470]: time="2025-09-13T00:21:03.211093220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:21:03.211904 containerd[1470]: time="2025-09-13T00:21:03.211143344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:21:03.211904 containerd[1470]: time="2025-09-13T00:21:03.211157161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:21:03.211904 containerd[1470]: time="2025-09-13T00:21:03.211254735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:21:03.241777 systemd[1]: Started cri-containerd-b2f0af658290d65a3e501f471bd559edfb67f81bfb1486fe307a648ca37b5e50.scope - libcontainer container b2f0af658290d65a3e501f471bd559edfb67f81bfb1486fe307a648ca37b5e50.
Sep 13 00:21:03.264062 containerd[1470]: time="2025-09-13T00:21:03.264011646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w28tn,Uid:e9d3c3ab-df17-45d8-a5a7-d55467fe3011,Namespace:calico-system,Attempt:0,} returns sandbox id \"b2f0af658290d65a3e501f471bd559edfb67f81bfb1486fe307a648ca37b5e50\""
Sep 13 00:21:04.871947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1559527821.mount: Deactivated successfully.
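For context on the records collapsed above: during dynamic plugin probing the kubelet executes every FlexVolume driver binary found under the plugin directory with the single argument init and unmarshals the binary's stdout as JSON. Here /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist ("executable file not found in $PATH"), stdout is empty, and the unmarshal fails with "unexpected end of JSON input". A minimal sketch of a driver that would satisfy the probe, written in Go and assuming only the documented FlexVolume calling convention (an illustration, not the real nodeagent~uds driver):

    // flexvolume-stub.go: answer the kubelet's FlexVolume probe.
    // Assumption: installed as the missing binary
    // /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON status the kubelet expects on stdout.
    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) < 2 {
            os.Exit(1)
        }
        var st driverStatus
        switch os.Args[1] {
        case "init":
            // An empty reply here is exactly what driver-call.go:262
            // reports above as "unexpected end of JSON input".
            st = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
        default:
            // Volume operations this stub does not implement.
            st = driverStatus{Status: "Not supported"}
        }
        out, _ := json.Marshal(st)
        fmt.Println(string(out))
    }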
Sep 13 00:21:05.256884 kubelet[2525]: E0913 00:21:05.256825 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vkpzj" podUID="fe5fc44c-61ad-4dd3-a615-acf708addb61"
Sep 13 00:21:05.881338 containerd[1470]: time="2025-09-13T00:21:05.881287667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:21:05.882088 containerd[1470]: time="2025-09-13T00:21:05.882052523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 13 00:21:05.883215 containerd[1470]: time="2025-09-13T00:21:05.883173363Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:21:05.885258 containerd[1470]: time="2025-09-13T00:21:05.885222160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:21:05.885915 containerd[1470]: time="2025-09-13T00:21:05.885869944Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.976948191s"
Sep 13 00:21:05.885915 containerd[1470]: time="2025-09-13T00:21:05.885912295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 13 00:21:05.886876 containerd[1470]: time="2025-09-13T00:21:05.886833037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 00:21:05.902255 containerd[1470]: time="2025-09-13T00:21:05.902202287Z" level=info msg="CreateContainer within sandbox \"973e94936fba45c7b770bb63efb7439910d4f1993d9d56abb2468e3c214cebd4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 13 00:21:05.916981 containerd[1470]: time="2025-09-13T00:21:05.916940975Z" level=info msg="CreateContainer within sandbox \"973e94936fba45c7b770bb63efb7439910d4f1993d9d56abb2468e3c214cebd4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2d1001d1fc706a2abd7efe876911446d7f08f185d45e892e4c5aa62f5ccc05ec\""
Sep 13 00:21:05.917487 containerd[1470]: time="2025-09-13T00:21:05.917457112Z" level=info msg="StartContainer for \"2d1001d1fc706a2abd7efe876911446d7f08f185d45e892e4c5aa62f5ccc05ec\""
Sep 13 00:21:05.968769 systemd[1]: Started cri-containerd-2d1001d1fc706a2abd7efe876911446d7f08f185d45e892e4c5aa62f5ccc05ec.scope - libcontainer container 2d1001d1fc706a2abd7efe876911446d7f08f185d45e892e4c5aa62f5ccc05ec.
Sep 13 00:21:06.169439 containerd[1470]: time="2025-09-13T00:21:06.169176851Z" level=info msg="StartContainer for \"2d1001d1fc706a2abd7efe876911446d7f08f185d45e892e4c5aa62f5ccc05ec\" returns successfully"
Sep 13 00:21:06.328582 kubelet[2525]: E0913 00:21:06.328427 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:21:06.351916 kubelet[2525]: I0913 00:21:06.351661 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7dbbf97c7f-fs8gj" podStartSLOduration=1.373586792 podStartE2EDuration="5.351643931s" podCreationTimestamp="2025-09-13 00:21:01 +0000 UTC" firstStartedPulling="2025-09-13 00:21:01.908489876 +0000 UTC m=+20.761931557" lastFinishedPulling="2025-09-13 00:21:05.886547005 +0000 UTC m=+24.739988696" observedRunningTime="2025-09-13 00:21:06.351563719 +0000 UTC m=+25.205005410" watchObservedRunningTime="2025-09-13 00:21:06.351643931 +0000 UTC m=+25.205085612"
[... the three-record FlexVolume probe failure recurs verbatim from 00:21:06.400 through 00:21:06.415; duplicates omitted ...]
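The pod_startup_latency_tracker record above reports internally consistent numbers: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+ offsets). A quick check in Go with the values copied from the record (the subtraction is my reading of the reported numbers, not a quote of the kubelet source):

    package main

    import "fmt"

    func main() {
        // Values copied from the pod_startup_latency_tracker record above.
        e2e := 5.351643931                  // podStartE2EDuration, seconds
        pull := 24.739988696 - 20.761931557 // lastFinishedPulling - firstStartedPulling (m=+ offsets)
        fmt.Printf("podStartSLOduration ~= %.9f\n", e2e-pull) // prints 1.373586792, as logged
    }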
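The recurring dns.go:153 record ("Nameserver limits exceeded") is the kubelet capping the pod resolver configuration at three nameserver entries, the classic glibc MAXNS limit: it keeps the first three and warns that the rest were omitted. A hypothetical /etc/resolv.conf that would yield exactly the applied nameserver line seen in these records (the fourth entry is an assumed example; only the first three appear in the log):

    # hypothetical: the fourth nameserver exceeds the 3-entry limit and is dropped
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4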
Sep 13 00:21:07.267027 kubelet[2525]: E0913 00:21:07.266979 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vkpzj" podUID="fe5fc44c-61ad-4dd3-a615-acf708addb61"
Sep 13 00:21:07.330141 kubelet[2525]: I0913 00:21:07.330079 2525 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:21:07.330700 kubelet[2525]: E0913 00:21:07.330477 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
[... the three-record FlexVolume probe failure recurs verbatim from 00:21:07.411 through 00:21:07.420; duplicates omitted, final record kept ...]
Sep 13 00:21:07.420205 kubelet[2525]: E0913 00:21:07.420189 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:07.420205 kubelet[2525]: W0913 00:21:07.420202 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:07.420257 kubelet[2525]: E0913 00:21:07.420213 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:21:07.420448 kubelet[2525]: E0913 00:21:07.420432 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:07.420448 kubelet[2525]: W0913 00:21:07.420445 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:07.420500 kubelet[2525]: E0913 00:21:07.420457 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:07.420735 kubelet[2525]: E0913 00:21:07.420716 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:07.420735 kubelet[2525]: W0913 00:21:07.420732 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:07.420823 kubelet[2525]: E0913 00:21:07.420743 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:07.421012 kubelet[2525]: E0913 00:21:07.420983 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:07.421012 kubelet[2525]: W0913 00:21:07.420998 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:07.421012 kubelet[2525]: E0913 00:21:07.421008 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:07.421251 kubelet[2525]: E0913 00:21:07.421235 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:07.421251 kubelet[2525]: W0913 00:21:07.421248 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:07.421305 kubelet[2525]: E0913 00:21:07.421261 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:07.421493 kubelet[2525]: E0913 00:21:07.421476 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:07.421493 kubelet[2525]: W0913 00:21:07.421490 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:07.421544 kubelet[2525]: E0913 00:21:07.421501 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:21:07.421742 kubelet[2525]: E0913 00:21:07.421726 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:07.421742 kubelet[2525]: W0913 00:21:07.421740 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:07.421812 kubelet[2525]: E0913 00:21:07.421751 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:07.421991 kubelet[2525]: E0913 00:21:07.421975 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:07.421991 kubelet[2525]: W0913 00:21:07.421989 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:07.422037 kubelet[2525]: E0913 00:21:07.422000 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:07.422263 kubelet[2525]: E0913 00:21:07.422246 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:07.422263 kubelet[2525]: W0913 00:21:07.422261 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:07.422324 kubelet[2525]: E0913 00:21:07.422272 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:21:07.422798 kubelet[2525]: E0913 00:21:07.422772 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:21:07.422798 kubelet[2525]: W0913 00:21:07.422790 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:21:07.422853 kubelet[2525]: E0913 00:21:07.422801 2525 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
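
The burst of entries above is the kubelet's FlexVolume plugin prober: each probe executes the driver binary with the argument init and expects a JSON status document on stdout, but the uds executable has not been installed yet, so the call produces no output and decoding the empty string fails. A minimal Go sketch of that failure mode follows; it is an illustration, not kubelet's actual driver-call.go, and the response struct mirrors the documented FlexVolume init reply ({"status":"Success","capabilities":{"attach":false}}):

    // flexprobe.go: why a missing FlexVolume driver surfaces as
    // "unexpected end of JSON input" rather than only an exec error:
    // the exec yields no output, and encoding/json fails with exactly
    // that message when asked to decode zero bytes.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus mirrors the JSON a FlexVolume driver prints for "init".
    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
        out, err := exec.Command(driver, "init").Output()
        if err != nil {
            fmt.Println("driver call failed:", err) // binary missing, out stays empty
        }
        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            fmt.Println("unmarshal failed:", err) // "unexpected end of JSON input"
        }
    }

The errors stop on their own once something installs a driver that prints that JSON at the probed path, which is exactly what the next block of entries records.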
Sep 13 00:21:08.075991 containerd[1470]: time="2025-09-13T00:21:08.075939073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:08.076964 containerd[1470]: time="2025-09-13T00:21:08.076917021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:21:08.078044 containerd[1470]: time="2025-09-13T00:21:08.078010268Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:08.080097 containerd[1470]: time="2025-09-13T00:21:08.080062326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:08.080887 containerd[1470]: time="2025-09-13T00:21:08.080839836Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.193966213s" Sep 13 00:21:08.080887 containerd[1470]: time="2025-09-13T00:21:08.080882837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:21:08.088129 containerd[1470]: time="2025-09-13T00:21:08.088097382Z" level=info msg="CreateContainer within sandbox \"b2f0af658290d65a3e501f471bd559edfb67f81bfb1486fe307a648ca37b5e50\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:21:08.107064 containerd[1470]: time="2025-09-13T00:21:08.107007002Z" level=info msg="CreateContainer within sandbox \"b2f0af658290d65a3e501f471bd559edfb67f81bfb1486fe307a648ca37b5e50\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8342dfa4d2778596689e9cee7907846a601d4b6739f54fa72e3502ac503db53b\"" Sep 13 00:21:08.107835 containerd[1470]: time="2025-09-13T00:21:08.107794160Z" level=info msg="StartContainer for \"8342dfa4d2778596689e9cee7907846a601d4b6739f54fa72e3502ac503db53b\"" Sep 13 00:21:08.153792 systemd[1]: Started cri-containerd-8342dfa4d2778596689e9cee7907846a601d4b6739f54fa72e3502ac503db53b.scope - libcontainer container 8342dfa4d2778596689e9cee7907846a601d4b6739f54fa72e3502ac503db53b. Sep 13 00:21:08.187983 containerd[1470]: time="2025-09-13T00:21:08.187936931Z" level=info msg="StartContainer for \"8342dfa4d2778596689e9cee7907846a601d4b6739f54fa72e3502ac503db53b\" returns successfully" Sep 13 00:21:08.198473 systemd[1]: cri-containerd-8342dfa4d2778596689e9cee7907846a601d4b6739f54fa72e3502ac503db53b.scope: Deactivated successfully. Sep 13 00:21:08.222452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8342dfa4d2778596689e9cee7907846a601d4b6739f54fa72e3502ac503db53b-rootfs.mount: Deactivated successfully.
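
The pod2daemon-flexvol image pulled and run here is Calico's flexvol-driver init container. It is a run-to-completion container whose job is to copy the uds driver into the kubelet's FlexVolume plugin directory (bind-mounted from the host; /opt/libexec/kubernetes/kubelet-plugins/volume/exec on Flatcar), which is what ends the probe failures above. The immediate scope deactivation and the shim-disconnected messages that follow are the normal teardown of a container that exited after doing its work, not a crash. In effect it does something like the sketch below; the in-image source path is a hypothetical stand-in:

    // installdriver.go: sketch of the flexvol-driver init container's job,
    // copying the uds binary into the host-mounted plugin directory so the
    // kubelet's next probe finds it. Paths are illustrative assumptions.
    package main

    import (
        "io"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        src, err := os.Open("/usr/local/bin/flexvol") // hypothetical in-image location
        if err != nil {
            log.Fatal(err)
        }
        defer src.Close()

        dir := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds"
        if err := os.MkdirAll(dir, 0o755); err != nil {
            log.Fatal(err)
        }
        dst, err := os.OpenFile(filepath.Join(dir, "uds"), os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
        if err != nil {
            log.Fatal(err)
        }
        defer dst.Close()

        if _, err := io.Copy(dst, src); err != nil {
            log.Fatal(err)
        }
    }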
Sep 13 00:21:08.237031 containerd[1470]: time="2025-09-13T00:21:08.234661341Z" level=info msg="shim disconnected" id=8342dfa4d2778596689e9cee7907846a601d4b6739f54fa72e3502ac503db53b namespace=k8s.io Sep 13 00:21:08.237031 containerd[1470]: time="2025-09-13T00:21:08.237015761Z" level=warning msg="cleaning up after shim disconnected" id=8342dfa4d2778596689e9cee7907846a601d4b6739f54fa72e3502ac503db53b namespace=k8s.io Sep 13 00:21:08.237031 containerd[1470]: time="2025-09-13T00:21:08.237031230Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:21:08.335517 containerd[1470]: time="2025-09-13T00:21:08.335394759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:21:09.257082 kubelet[2525]: E0913 00:21:09.257021 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vkpzj" podUID="fe5fc44c-61ad-4dd3-a615-acf708addb61" Sep 13 00:21:11.257345 kubelet[2525]: E0913 00:21:11.257279 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vkpzj" podUID="fe5fc44c-61ad-4dd3-a615-acf708addb61" Sep 13 00:21:11.883138 containerd[1470]: time="2025-09-13T00:21:11.883084203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:11.883968 containerd[1470]: time="2025-09-13T00:21:11.883928768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 00:21:11.885167 containerd[1470]: time="2025-09-13T00:21:11.885126290Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:11.887541 containerd[1470]: time="2025-09-13T00:21:11.887481929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:11.888047 containerd[1470]: time="2025-09-13T00:21:11.888018763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.552589197s" Sep 13 00:21:11.888087 containerd[1470]: time="2025-09-13T00:21:11.888044753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:21:11.892937 containerd[1470]: time="2025-09-13T00:21:11.892876629Z" level=info msg="CreateContainer within sandbox \"b2f0af658290d65a3e501f471bd559edfb67f81bfb1486fe307a648ca37b5e50\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:21:11.908338 containerd[1470]: time="2025-09-13T00:21:11.908293588Z" level=info msg="CreateContainer within sandbox \"b2f0af658290d65a3e501f471bd559edfb67f81bfb1486fe307a648ca37b5e50\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"01956fc66836a0cbb357c6494960a92dcd990ce813f8bffe78431958e54e7616\"" Sep 13 00:21:11.908802 containerd[1470]: time="2025-09-13T00:21:11.908771441Z" level=info msg="StartContainer for \"01956fc66836a0cbb357c6494960a92dcd990ce813f8bffe78431958e54e7616\"" Sep 13 00:21:11.944765 systemd[1]: Started cri-containerd-01956fc66836a0cbb357c6494960a92dcd990ce813f8bffe78431958e54e7616.scope - libcontainer container 01956fc66836a0cbb357c6494960a92dcd990ce813f8bffe78431958e54e7616. Sep 13 00:21:12.128498 containerd[1470]: time="2025-09-13T00:21:12.128441069Z" level=info msg="StartContainer for \"01956fc66836a0cbb357c6494960a92dcd990ce813f8bffe78431958e54e7616\" returns successfully" Sep 13 00:21:13.259697 kubelet[2525]: E0913 00:21:13.257725 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vkpzj" podUID="fe5fc44c-61ad-4dd3-a615-acf708addb61" Sep 13 00:21:13.385765 systemd[1]: cri-containerd-01956fc66836a0cbb357c6494960a92dcd990ce813f8bffe78431958e54e7616.scope: Deactivated successfully. Sep 13 00:21:13.409255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01956fc66836a0cbb357c6494960a92dcd990ce813f8bffe78431958e54e7616-rootfs.mount: Deactivated successfully. Sep 13 00:21:13.465343 kubelet[2525]: I0913 00:21:13.465285 2525 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:21:13.563754 containerd[1470]: time="2025-09-13T00:21:13.563488073Z" level=info msg="shim disconnected" id=01956fc66836a0cbb357c6494960a92dcd990ce813f8bffe78431958e54e7616 namespace=k8s.io Sep 13 00:21:13.563754 containerd[1470]: time="2025-09-13T00:21:13.563683281Z" level=warning msg="cleaning up after shim disconnected" id=01956fc66836a0cbb357c6494960a92dcd990ce813f8bffe78431958e54e7616 namespace=k8s.io Sep 13 00:21:13.563754 containerd[1470]: time="2025-09-13T00:21:13.563701726Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:21:13.595801 systemd[1]: Created slice kubepods-burstable-podc31b50ac_aa0d_44d3_b1cb_a5fd228e8908.slice - libcontainer container kubepods-burstable-podc31b50ac_aa0d_44d3_b1cb_a5fd228e8908.slice. Sep 13 00:21:13.603032 containerd[1470]: time="2025-09-13T00:21:13.602704388Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:21:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:21:13.612800 systemd[1]: Created slice kubepods-besteffort-pod5904f0eb_a84b_4d3d_ba8a_eed68333160b.slice - libcontainer container kubepods-besteffort-pod5904f0eb_a84b_4d3d_ba8a_eed68333160b.slice. Sep 13 00:21:13.624791 systemd[1]: Created slice kubepods-burstable-pod4eabb29c_8307_4ea2_a6d0_81142535e33a.slice - libcontainer container kubepods-burstable-pod4eabb29c_8307_4ea2_a6d0_81142535e33a.slice. Sep 13 00:21:13.631370 systemd[1]: Created slice kubepods-besteffort-poda11b08e4_771c_4220_8d05_979228b63ed8.slice - libcontainer container kubepods-besteffort-poda11b08e4_771c_4220_8d05_979228b63ed8.slice. Sep 13 00:21:13.637848 systemd[1]: Created slice kubepods-besteffort-pod337e4386_9d96_4f61_84bf_9854d2b2501c.slice - libcontainer container kubepods-besteffort-pod337e4386_9d96_4f61_84bf_9854d2b2501c.slice. 
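
Two threads run through this stretch of the log. First, csi-node-driver-vkpzj keeps failing with "NetworkReady=false reason:NetworkPluginNotReady ... cni plugin not initialized": the runtime reports the pod network as not ready until a CNI configuration exists on disk, and the install-cni container (from the calico/cni image pulled above) is what writes it, conventionally a conflist under /etc/cni/net.d plus plugin binaries under /opt/cni/bin. Once install-cni finishes, the kubelet logs "Fast updating node status as it just became ready" and the scheduler immediately places the backlog of pending pods, which is why a burst of kubepods slices is created next. Second, the RunPodSandbox failures further down ("stat /var/lib/calico/nodename: no such file or directory") are the tail end of the same bring-up: the CNI config now exists, but the calico-node container has not yet started and written its nodename file, so every sandbox add and delete fails until it does. A sketch of the readiness condition, using the conventional default directory (an assumption, not read from this node's configuration):

    // cnicheck.go: the condition behind "cni plugin not initialized" --
    // NetworkReady stays false while the CNI conf dir has no network config.
    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        // Matches both .conf and .conflist files, e.g. Calico's 10-calico.conflist.
        confs, _ := filepath.Glob("/etc/cni/net.d/*.conf*")
        if len(confs) == 0 {
            fmt.Println("NetworkReady=false reason:NetworkPluginNotReady")
            return
        }
        fmt.Println("NetworkReady=true, configs:", confs)
    }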
Sep 13 00:21:13.643096 systemd[1]: Created slice kubepods-besteffort-podd5330e44_0081_46ea_b521_bfc339b36095.slice - libcontainer container kubepods-besteffort-podd5330e44_0081_46ea_b521_bfc339b36095.slice. Sep 13 00:21:13.650318 systemd[1]: Created slice kubepods-besteffort-pod81467d75_5765_475d_8a7f_390251ac6b99.slice - libcontainer container kubepods-besteffort-pod81467d75_5765_475d_8a7f_390251ac6b99.slice. Sep 13 00:21:13.655983 kubelet[2525]: I0913 00:21:13.655933 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/337e4386-9d96-4f61-84bf-9854d2b2501c-goldmane-key-pair\") pod \"goldmane-54d579b49d-whpvq\" (UID: \"337e4386-9d96-4f61-84bf-9854d2b2501c\") " pod="calico-system/goldmane-54d579b49d-whpvq" Sep 13 00:21:13.655983 kubelet[2525]: I0913 00:21:13.655983 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5904f0eb-a84b-4d3d-ba8a-eed68333160b-calico-apiserver-certs\") pod \"calico-apiserver-8667466bbf-45njh\" (UID: \"5904f0eb-a84b-4d3d-ba8a-eed68333160b\") " pod="calico-apiserver/calico-apiserver-8667466bbf-45njh" Sep 13 00:21:13.656159 kubelet[2525]: I0913 00:21:13.656008 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c31b50ac-aa0d-44d3-b1cb-a5fd228e8908-config-volume\") pod \"coredns-674b8bbfcf-w2qff\" (UID: \"c31b50ac-aa0d-44d3-b1cb-a5fd228e8908\") " pod="kube-system/coredns-674b8bbfcf-w2qff" Sep 13 00:21:13.656159 kubelet[2525]: I0913 00:21:13.656029 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl8ft\" (UniqueName: \"kubernetes.io/projected/5904f0eb-a84b-4d3d-ba8a-eed68333160b-kube-api-access-cl8ft\") pod \"calico-apiserver-8667466bbf-45njh\" (UID: \"5904f0eb-a84b-4d3d-ba8a-eed68333160b\") " pod="calico-apiserver/calico-apiserver-8667466bbf-45njh" Sep 13 00:21:13.656159 kubelet[2525]: I0913 00:21:13.656051 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/337e4386-9d96-4f61-84bf-9854d2b2501c-config\") pod \"goldmane-54d579b49d-whpvq\" (UID: \"337e4386-9d96-4f61-84bf-9854d2b2501c\") " pod="calico-system/goldmane-54d579b49d-whpvq" Sep 13 00:21:13.656159 kubelet[2525]: I0913 00:21:13.656085 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/337e4386-9d96-4f61-84bf-9854d2b2501c-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-whpvq\" (UID: \"337e4386-9d96-4f61-84bf-9854d2b2501c\") " pod="calico-system/goldmane-54d579b49d-whpvq" Sep 13 00:21:13.656159 kubelet[2525]: I0913 00:21:13.656110 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5hqm\" (UniqueName: \"kubernetes.io/projected/c31b50ac-aa0d-44d3-b1cb-a5fd228e8908-kube-api-access-w5hqm\") pod \"coredns-674b8bbfcf-w2qff\" (UID: \"c31b50ac-aa0d-44d3-b1cb-a5fd228e8908\") " pod="kube-system/coredns-674b8bbfcf-w2qff" Sep 13 00:21:13.656308 kubelet[2525]: I0913 00:21:13.656133 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/81467d75-5765-475d-8a7f-390251ac6b99-tigera-ca-bundle\") pod \"calico-kube-controllers-6f44cd849d-mzc4c\" (UID: \"81467d75-5765-475d-8a7f-390251ac6b99\") " pod="calico-system/calico-kube-controllers-6f44cd849d-mzc4c" Sep 13 00:21:13.656308 kubelet[2525]: I0913 00:21:13.656161 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7c4k\" (UniqueName: \"kubernetes.io/projected/4eabb29c-8307-4ea2-a6d0-81142535e33a-kube-api-access-l7c4k\") pod \"coredns-674b8bbfcf-f4k9l\" (UID: \"4eabb29c-8307-4ea2-a6d0-81142535e33a\") " pod="kube-system/coredns-674b8bbfcf-f4k9l" Sep 13 00:21:13.656308 kubelet[2525]: I0913 00:21:13.656184 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngj7m\" (UniqueName: \"kubernetes.io/projected/337e4386-9d96-4f61-84bf-9854d2b2501c-kube-api-access-ngj7m\") pod \"goldmane-54d579b49d-whpvq\" (UID: \"337e4386-9d96-4f61-84bf-9854d2b2501c\") " pod="calico-system/goldmane-54d579b49d-whpvq" Sep 13 00:21:13.656308 kubelet[2525]: I0913 00:21:13.656204 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2xzm\" (UniqueName: \"kubernetes.io/projected/d5330e44-0081-46ea-b521-bfc339b36095-kube-api-access-r2xzm\") pod \"calico-apiserver-8667466bbf-nvvbk\" (UID: \"d5330e44-0081-46ea-b521-bfc339b36095\") " pod="calico-apiserver/calico-apiserver-8667466bbf-nvvbk" Sep 13 00:21:13.656308 kubelet[2525]: I0913 00:21:13.656244 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d5330e44-0081-46ea-b521-bfc339b36095-calico-apiserver-certs\") pod \"calico-apiserver-8667466bbf-nvvbk\" (UID: \"d5330e44-0081-46ea-b521-bfc339b36095\") " pod="calico-apiserver/calico-apiserver-8667466bbf-nvvbk" Sep 13 00:21:13.656440 kubelet[2525]: I0913 00:21:13.656266 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a11b08e4-771c-4220-8d05-979228b63ed8-whisker-backend-key-pair\") pod \"whisker-d846d4f55-j7f97\" (UID: \"a11b08e4-771c-4220-8d05-979228b63ed8\") " pod="calico-system/whisker-d846d4f55-j7f97" Sep 13 00:21:13.656440 kubelet[2525]: I0913 00:21:13.656302 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh6rr\" (UniqueName: \"kubernetes.io/projected/a11b08e4-771c-4220-8d05-979228b63ed8-kube-api-access-qh6rr\") pod \"whisker-d846d4f55-j7f97\" (UID: \"a11b08e4-771c-4220-8d05-979228b63ed8\") " pod="calico-system/whisker-d846d4f55-j7f97" Sep 13 00:21:13.656440 kubelet[2525]: I0913 00:21:13.656326 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a11b08e4-771c-4220-8d05-979228b63ed8-whisker-ca-bundle\") pod \"whisker-d846d4f55-j7f97\" (UID: \"a11b08e4-771c-4220-8d05-979228b63ed8\") " pod="calico-system/whisker-d846d4f55-j7f97" Sep 13 00:21:13.656440 kubelet[2525]: I0913 00:21:13.656350 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4eabb29c-8307-4ea2-a6d0-81142535e33a-config-volume\") pod \"coredns-674b8bbfcf-f4k9l\" (UID: \"4eabb29c-8307-4ea2-a6d0-81142535e33a\") " 
pod="kube-system/coredns-674b8bbfcf-f4k9l" Sep 13 00:21:13.656440 kubelet[2525]: I0913 00:21:13.656373 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkwdt\" (UniqueName: \"kubernetes.io/projected/81467d75-5765-475d-8a7f-390251ac6b99-kube-api-access-lkwdt\") pod \"calico-kube-controllers-6f44cd849d-mzc4c\" (UID: \"81467d75-5765-475d-8a7f-390251ac6b99\") " pod="calico-system/calico-kube-controllers-6f44cd849d-mzc4c" Sep 13 00:21:13.904743 kubelet[2525]: E0913 00:21:13.904586 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:21:13.905298 containerd[1470]: time="2025-09-13T00:21:13.905248095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w2qff,Uid:c31b50ac-aa0d-44d3-b1cb-a5fd228e8908,Namespace:kube-system,Attempt:0,}" Sep 13 00:21:13.923271 containerd[1470]: time="2025-09-13T00:21:13.923227461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8667466bbf-45njh,Uid:5904f0eb-a84b-4d3d-ba8a-eed68333160b,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:21:13.928855 kubelet[2525]: E0913 00:21:13.928811 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:21:13.929315 containerd[1470]: time="2025-09-13T00:21:13.929281302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f4k9l,Uid:4eabb29c-8307-4ea2-a6d0-81142535e33a,Namespace:kube-system,Attempt:0,}" Sep 13 00:21:13.936813 containerd[1470]: time="2025-09-13T00:21:13.936764080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d846d4f55-j7f97,Uid:a11b08e4-771c-4220-8d05-979228b63ed8,Namespace:calico-system,Attempt:0,}" Sep 13 00:21:13.940901 containerd[1470]: time="2025-09-13T00:21:13.940848272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-whpvq,Uid:337e4386-9d96-4f61-84bf-9854d2b2501c,Namespace:calico-system,Attempt:0,}" Sep 13 00:21:13.946266 containerd[1470]: time="2025-09-13T00:21:13.946216768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8667466bbf-nvvbk,Uid:d5330e44-0081-46ea-b521-bfc339b36095,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:21:13.956857 containerd[1470]: time="2025-09-13T00:21:13.956826971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f44cd849d-mzc4c,Uid:81467d75-5765-475d-8a7f-390251ac6b99,Namespace:calico-system,Attempt:0,}" Sep 13 00:21:14.083665 containerd[1470]: time="2025-09-13T00:21:14.083559541Z" level=error msg="Failed to destroy network for sandbox \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.084754 containerd[1470]: time="2025-09-13T00:21:14.084707077Z" level=error msg="Failed to destroy network for sandbox \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.088265 containerd[1470]: time="2025-09-13T00:21:14.088221171Z" 
level=error msg="encountered an error cleaning up failed sandbox \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.088417 containerd[1470]: time="2025-09-13T00:21:14.088277828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8667466bbf-45njh,Uid:5904f0eb-a84b-4d3d-ba8a-eed68333160b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.088496 kubelet[2525]: E0913 00:21:14.088465 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.088710 kubelet[2525]: E0913 00:21:14.088529 2525 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8667466bbf-45njh" Sep 13 00:21:14.088710 kubelet[2525]: E0913 00:21:14.088550 2525 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8667466bbf-45njh" Sep 13 00:21:14.088710 kubelet[2525]: E0913 00:21:14.088597 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8667466bbf-45njh_calico-apiserver(5904f0eb-a84b-4d3d-ba8a-eed68333160b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8667466bbf-45njh_calico-apiserver(5904f0eb-a84b-4d3d-ba8a-eed68333160b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8667466bbf-45njh" podUID="5904f0eb-a84b-4d3d-ba8a-eed68333160b" Sep 13 00:21:14.129935 containerd[1470]: time="2025-09-13T00:21:14.129874695Z" level=error msg="Failed to destroy network for sandbox \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.130329 containerd[1470]: time="2025-09-13T00:21:14.130295689Z" level=error msg="encountered an error cleaning up failed sandbox \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.130364 containerd[1470]: time="2025-09-13T00:21:14.130349601Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8667466bbf-nvvbk,Uid:d5330e44-0081-46ea-b521-bfc339b36095,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.130732 kubelet[2525]: E0913 00:21:14.130663 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.130732 kubelet[2525]: E0913 00:21:14.130737 2525 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8667466bbf-nvvbk" Sep 13 00:21:14.130907 kubelet[2525]: E0913 00:21:14.130764 2525 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8667466bbf-nvvbk" Sep 13 00:21:14.130907 kubelet[2525]: E0913 00:21:14.130834 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8667466bbf-nvvbk_calico-apiserver(d5330e44-0081-46ea-b521-bfc339b36095)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8667466bbf-nvvbk_calico-apiserver(d5330e44-0081-46ea-b521-bfc339b36095)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8667466bbf-nvvbk" podUID="d5330e44-0081-46ea-b521-bfc339b36095" Sep 13 00:21:14.159093 containerd[1470]: time="2025-09-13T00:21:14.158994397Z" level=error msg="encountered an error cleaning up failed sandbox 
\"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.159093 containerd[1470]: time="2025-09-13T00:21:14.159046896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w2qff,Uid:c31b50ac-aa0d-44d3-b1cb-a5fd228e8908,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.159325 kubelet[2525]: E0913 00:21:14.159285 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.159392 kubelet[2525]: E0913 00:21:14.159360 2525 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-w2qff" Sep 13 00:21:14.159545 kubelet[2525]: E0913 00:21:14.159414 2525 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-w2qff" Sep 13 00:21:14.159545 kubelet[2525]: E0913 00:21:14.159491 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-w2qff_kube-system(c31b50ac-aa0d-44d3-b1cb-a5fd228e8908)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-w2qff_kube-system(c31b50ac-aa0d-44d3-b1cb-a5fd228e8908)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-w2qff" podUID="c31b50ac-aa0d-44d3-b1cb-a5fd228e8908" Sep 13 00:21:14.382997 kubelet[2525]: I0913 00:21:14.382631 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Sep 13 00:21:14.383792 containerd[1470]: time="2025-09-13T00:21:14.383586342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:21:14.385103 kubelet[2525]: I0913 00:21:14.385057 2525 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Sep 13 00:21:14.388985 kubelet[2525]: I0913 00:21:14.386265 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Sep 13 00:21:14.389071 containerd[1470]: time="2025-09-13T00:21:14.387823360Z" level=info msg="StopPodSandbox for \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\"" Sep 13 00:21:14.389071 containerd[1470]: time="2025-09-13T00:21:14.388634161Z" level=info msg="StopPodSandbox for \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\"" Sep 13 00:21:14.389071 containerd[1470]: time="2025-09-13T00:21:14.388740702Z" level=info msg="StopPodSandbox for \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\"" Sep 13 00:21:14.390153 containerd[1470]: time="2025-09-13T00:21:14.389842091Z" level=info msg="Ensure that sandbox 2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306 in task-service has been cleanup successfully" Sep 13 00:21:14.390153 containerd[1470]: time="2025-09-13T00:21:14.389845237Z" level=info msg="Ensure that sandbox 0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde in task-service has been cleanup successfully" Sep 13 00:21:14.390285 containerd[1470]: time="2025-09-13T00:21:14.389857160Z" level=info msg="Ensure that sandbox 903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2 in task-service has been cleanup successfully" Sep 13 00:21:14.485242 containerd[1470]: time="2025-09-13T00:21:14.485187264Z" level=error msg="StopPodSandbox for \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\" failed" error="failed to destroy network for sandbox \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.485817 kubelet[2525]: E0913 00:21:14.485770 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Sep 13 00:21:14.485934 kubelet[2525]: E0913 00:21:14.485844 2525 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306"} Sep 13 00:21:14.485934 kubelet[2525]: E0913 00:21:14.485900 2525 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c31b50ac-aa0d-44d3-b1cb-a5fd228e8908\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:21:14.486112 kubelet[2525]: E0913 00:21:14.485932 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c31b50ac-aa0d-44d3-b1cb-a5fd228e8908\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-w2qff" podUID="c31b50ac-aa0d-44d3-b1cb-a5fd228e8908" Sep 13 00:21:14.489334 containerd[1470]: time="2025-09-13T00:21:14.489276133Z" level=error msg="StopPodSandbox for \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\" failed" error="failed to destroy network for sandbox \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.489884 kubelet[2525]: E0913 00:21:14.489830 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Sep 13 00:21:14.489950 kubelet[2525]: E0913 00:21:14.489892 2525 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2"} Sep 13 00:21:14.489950 kubelet[2525]: E0913 00:21:14.489919 2525 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d5330e44-0081-46ea-b521-bfc339b36095\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:21:14.490053 kubelet[2525]: E0913 00:21:14.489949 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d5330e44-0081-46ea-b521-bfc339b36095\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8667466bbf-nvvbk" podUID="d5330e44-0081-46ea-b521-bfc339b36095" Sep 13 00:21:14.494062 containerd[1470]: time="2025-09-13T00:21:14.493942272Z" level=error msg="StopPodSandbox for \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\" failed" error="failed to destroy network for sandbox \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.494326 kubelet[2525]: E0913 00:21:14.494260 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to destroy network for sandbox \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Sep 13 00:21:14.494386 kubelet[2525]: E0913 00:21:14.494342 2525 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde"} Sep 13 00:21:14.494386 kubelet[2525]: E0913 00:21:14.494386 2525 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5904f0eb-a84b-4d3d-ba8a-eed68333160b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:21:14.494525 kubelet[2525]: E0913 00:21:14.494417 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5904f0eb-a84b-4d3d-ba8a-eed68333160b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8667466bbf-45njh" podUID="5904f0eb-a84b-4d3d-ba8a-eed68333160b" Sep 13 00:21:14.540908 containerd[1470]: time="2025-09-13T00:21:14.540784691Z" level=error msg="Failed to destroy network for sandbox \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.541600 containerd[1470]: time="2025-09-13T00:21:14.541572919Z" level=error msg="encountered an error cleaning up failed sandbox \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.541782 containerd[1470]: time="2025-09-13T00:21:14.541729725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d846d4f55-j7f97,Uid:a11b08e4-771c-4220-8d05-979228b63ed8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.542123 kubelet[2525]: E0913 00:21:14.542072 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.542186 kubelet[2525]: E0913 00:21:14.542151 2525 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d846d4f55-j7f97" Sep 13 00:21:14.542186 kubelet[2525]: E0913 00:21:14.542174 2525 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d846d4f55-j7f97" Sep 13 00:21:14.542310 kubelet[2525]: E0913 00:21:14.542254 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-d846d4f55-j7f97_calico-system(a11b08e4-771c-4220-8d05-979228b63ed8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-d846d4f55-j7f97_calico-system(a11b08e4-771c-4220-8d05-979228b63ed8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d846d4f55-j7f97" podUID="a11b08e4-771c-4220-8d05-979228b63ed8" Sep 13 00:21:14.543902 containerd[1470]: time="2025-09-13T00:21:14.543831503Z" level=error msg="Failed to destroy network for sandbox \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.544436 containerd[1470]: time="2025-09-13T00:21:14.544389687Z" level=error msg="encountered an error cleaning up failed sandbox \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.544480 containerd[1470]: time="2025-09-13T00:21:14.544455621Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-whpvq,Uid:337e4386-9d96-4f61-84bf-9854d2b2501c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.544845 kubelet[2525]: E0913 00:21:14.544780 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.544906 kubelet[2525]: E0913 00:21:14.544872 2525 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-whpvq" Sep 13 00:21:14.544906 kubelet[2525]: E0913 00:21:14.544897 2525 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-whpvq" Sep 13 00:21:14.545012 kubelet[2525]: E0913 00:21:14.544957 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-whpvq_calico-system(337e4386-9d96-4f61-84bf-9854d2b2501c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-whpvq_calico-system(337e4386-9d96-4f61-84bf-9854d2b2501c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-whpvq" podUID="337e4386-9d96-4f61-84bf-9854d2b2501c" Sep 13 00:21:14.545738 containerd[1470]: time="2025-09-13T00:21:14.545711742Z" level=error msg="Failed to destroy network for sandbox \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.546301 containerd[1470]: time="2025-09-13T00:21:14.546126165Z" level=error msg="encountered an error cleaning up failed sandbox \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.546301 containerd[1470]: time="2025-09-13T00:21:14.546183503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f44cd849d-mzc4c,Uid:81467d75-5765-475d-8a7f-390251ac6b99,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.546383 kubelet[2525]: E0913 00:21:14.546332 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.546383 kubelet[2525]: E0913 00:21:14.546367 2525 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f44cd849d-mzc4c" Sep 13 00:21:14.546483 kubelet[2525]: E0913 00:21:14.546395 2525 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f44cd849d-mzc4c" Sep 13 00:21:14.546575 kubelet[2525]: E0913 00:21:14.546539 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f44cd849d-mzc4c_calico-system(81467d75-5765-475d-8a7f-390251ac6b99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f44cd849d-mzc4c_calico-system(81467d75-5765-475d-8a7f-390251ac6b99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f44cd849d-mzc4c" podUID="81467d75-5765-475d-8a7f-390251ac6b99" Sep 13 00:21:14.554202 containerd[1470]: time="2025-09-13T00:21:14.554145093Z" level=error msg="Failed to destroy network for sandbox \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.554631 containerd[1470]: time="2025-09-13T00:21:14.554579553Z" level=error msg="encountered an error cleaning up failed sandbox \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.554687 containerd[1470]: time="2025-09-13T00:21:14.554654174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f4k9l,Uid:4eabb29c-8307-4ea2-a6d0-81142535e33a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.554856 kubelet[2525]: E0913 00:21:14.554804 
2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:14.554856 kubelet[2525]: E0913 00:21:14.554849 2525 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-f4k9l" Sep 13 00:21:14.554948 kubelet[2525]: E0913 00:21:14.554865 2525 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-f4k9l" Sep 13 00:21:14.554948 kubelet[2525]: E0913 00:21:14.554921 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-f4k9l_kube-system(4eabb29c-8307-4ea2-a6d0-81142535e33a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-f4k9l_kube-system(4eabb29c-8307-4ea2-a6d0-81142535e33a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-f4k9l" podUID="4eabb29c-8307-4ea2-a6d0-81142535e33a" Sep 13 00:21:15.268189 systemd[1]: Created slice kubepods-besteffort-podfe5fc44c_61ad_4dd3_a615_acf708addb61.slice - libcontainer container kubepods-besteffort-podfe5fc44c_61ad_4dd3_a615_acf708addb61.slice. 
Sep 13 00:21:15.271290 containerd[1470]: time="2025-09-13T00:21:15.270867323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vkpzj,Uid:fe5fc44c-61ad-4dd3-a615-acf708addb61,Namespace:calico-system,Attempt:0,}" Sep 13 00:21:15.330464 containerd[1470]: time="2025-09-13T00:21:15.330385373Z" level=error msg="Failed to destroy network for sandbox \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:15.330900 containerd[1470]: time="2025-09-13T00:21:15.330864738Z" level=error msg="encountered an error cleaning up failed sandbox \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:15.330958 containerd[1470]: time="2025-09-13T00:21:15.330917798Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vkpzj,Uid:fe5fc44c-61ad-4dd3-a615-acf708addb61,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:15.331239 kubelet[2525]: E0913 00:21:15.331166 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:15.331292 kubelet[2525]: E0913 00:21:15.331264 2525 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vkpzj" Sep 13 00:21:15.331321 kubelet[2525]: E0913 00:21:15.331289 2525 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vkpzj" Sep 13 00:21:15.331402 kubelet[2525]: E0913 00:21:15.331355 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vkpzj_calico-system(fe5fc44c-61ad-4dd3-a615-acf708addb61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vkpzj_calico-system(fe5fc44c-61ad-4dd3-a615-acf708addb61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vkpzj" podUID="fe5fc44c-61ad-4dd3-a615-acf708addb61" Sep 13 00:21:15.389180 kubelet[2525]: I0913 00:21:15.389140 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Sep 13 00:21:15.390128 containerd[1470]: time="2025-09-13T00:21:15.390061242Z" level=info msg="StopPodSandbox for \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\"" Sep 13 00:21:15.390380 containerd[1470]: time="2025-09-13T00:21:15.390252763Z" level=info msg="Ensure that sandbox 1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a in task-service has been cleanup successfully" Sep 13 00:21:15.391873 kubelet[2525]: I0913 00:21:15.391054 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Sep 13 00:21:15.392467 containerd[1470]: time="2025-09-13T00:21:15.391955868Z" level=info msg="StopPodSandbox for \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\"" Sep 13 00:21:15.392467 containerd[1470]: time="2025-09-13T00:21:15.392180833Z" level=info msg="Ensure that sandbox 0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3 in task-service has been cleanup successfully" Sep 13 00:21:15.393866 kubelet[2525]: I0913 00:21:15.393292 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Sep 13 00:21:15.394566 containerd[1470]: time="2025-09-13T00:21:15.394411924Z" level=info msg="StopPodSandbox for \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\"" Sep 13 00:21:15.394831 containerd[1470]: time="2025-09-13T00:21:15.394610518Z" level=info msg="Ensure that sandbox 2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d in task-service has been cleanup successfully" Sep 13 00:21:15.395554 kubelet[2525]: I0913 00:21:15.395302 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Sep 13 00:21:15.396094 containerd[1470]: time="2025-09-13T00:21:15.396054905Z" level=info msg="StopPodSandbox for \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\"" Sep 13 00:21:15.396351 containerd[1470]: time="2025-09-13T00:21:15.396235706Z" level=info msg="Ensure that sandbox 849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b in task-service has been cleanup successfully" Sep 13 00:21:15.398319 kubelet[2525]: I0913 00:21:15.398272 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Sep 13 00:21:15.399718 containerd[1470]: time="2025-09-13T00:21:15.399049457Z" level=info msg="StopPodSandbox for \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\"" Sep 13 00:21:15.399718 containerd[1470]: time="2025-09-13T00:21:15.399383387Z" level=info msg="Ensure that sandbox 43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569 in task-service has been cleanup successfully" Sep 13 00:21:15.411315 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d-shm.mount: Deactivated successfully. Sep 13 00:21:15.411439 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b-shm.mount: Deactivated successfully. Sep 13 00:21:15.411518 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a-shm.mount: Deactivated successfully. Sep 13 00:21:15.438325 containerd[1470]: time="2025-09-13T00:21:15.438170719Z" level=error msg="StopPodSandbox for \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\" failed" error="failed to destroy network for sandbox \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:15.438539 kubelet[2525]: E0913 00:21:15.438492 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Sep 13 00:21:15.438607 kubelet[2525]: E0913 00:21:15.438554 2525 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d"} Sep 13 00:21:15.438607 kubelet[2525]: E0913 00:21:15.438592 2525 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"81467d75-5765-475d-8a7f-390251ac6b99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:21:15.438865 kubelet[2525]: E0913 00:21:15.438636 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"81467d75-5765-475d-8a7f-390251ac6b99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f44cd849d-mzc4c" podUID="81467d75-5765-475d-8a7f-390251ac6b99" Sep 13 00:21:15.449472 containerd[1470]: time="2025-09-13T00:21:15.449417147Z" level=error msg="StopPodSandbox for \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\" failed" error="failed to destroy network for sandbox \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:15.449784 containerd[1470]: 
time="2025-09-13T00:21:15.449760826Z" level=error msg="StopPodSandbox for \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\" failed" error="failed to destroy network for sandbox \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:15.450037 kubelet[2525]: E0913 00:21:15.450002 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Sep 13 00:21:15.450095 kubelet[2525]: E0913 00:21:15.450046 2525 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a"} Sep 13 00:21:15.450095 kubelet[2525]: E0913 00:21:15.450073 2525 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"337e4386-9d96-4f61-84bf-9854d2b2501c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:21:15.450185 kubelet[2525]: E0913 00:21:15.450092 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"337e4386-9d96-4f61-84bf-9854d2b2501c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-whpvq" podUID="337e4386-9d96-4f61-84bf-9854d2b2501c" Sep 13 00:21:15.450185 kubelet[2525]: E0913 00:21:15.450118 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Sep 13 00:21:15.450185 kubelet[2525]: E0913 00:21:15.450133 2525 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569"} Sep 13 00:21:15.450185 kubelet[2525]: E0913 00:21:15.450148 2525 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4eabb29c-8307-4ea2-a6d0-81142535e33a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:21:15.450313 kubelet[2525]: E0913 00:21:15.450166 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4eabb29c-8307-4ea2-a6d0-81142535e33a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-f4k9l" podUID="4eabb29c-8307-4ea2-a6d0-81142535e33a" Sep 13 00:21:15.452558 containerd[1470]: time="2025-09-13T00:21:15.452504945Z" level=error msg="StopPodSandbox for \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\" failed" error="failed to destroy network for sandbox \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:15.452972 kubelet[2525]: E0913 00:21:15.452906 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Sep 13 00:21:15.452972 kubelet[2525]: E0913 00:21:15.452975 2525 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3"} Sep 13 00:21:15.453165 kubelet[2525]: E0913 00:21:15.453014 2525 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe5fc44c-61ad-4dd3-a615-acf708addb61\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:21:15.453165 kubelet[2525]: E0913 00:21:15.453064 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe5fc44c-61ad-4dd3-a615-acf708addb61\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vkpzj" podUID="fe5fc44c-61ad-4dd3-a615-acf708addb61" Sep 13 00:21:15.461167 containerd[1470]: time="2025-09-13T00:21:15.461112462Z" level=error msg="StopPodSandbox for \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\" failed" error="failed to destroy network for sandbox 
\"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:21:15.461381 kubelet[2525]: E0913 00:21:15.461322 2525 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Sep 13 00:21:15.461381 kubelet[2525]: E0913 00:21:15.461375 2525 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b"} Sep 13 00:21:15.461571 kubelet[2525]: E0913 00:21:15.461398 2525 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a11b08e4-771c-4220-8d05-979228b63ed8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:21:15.461571 kubelet[2525]: E0913 00:21:15.461423 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a11b08e4-771c-4220-8d05-979228b63ed8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d846d4f55-j7f97" podUID="a11b08e4-771c-4220-8d05-979228b63ed8" Sep 13 00:21:19.132719 kubelet[2525]: I0913 00:21:19.132638 2525 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:21:19.133403 kubelet[2525]: E0913 00:21:19.133097 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:21:19.410740 kubelet[2525]: E0913 00:21:19.410594 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:21:20.246036 systemd[1]: Started sshd@7-10.0.0.7:22-10.0.0.1:55760.service - OpenSSH per-connection server daemon (10.0.0.1:55760). Sep 13 00:21:20.298400 sshd[3799]: Accepted publickey for core from 10.0.0.1 port 55760 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU Sep 13 00:21:20.300497 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:21:20.306784 systemd-logind[1449]: New session 8 of user core. Sep 13 00:21:20.312907 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 13 00:21:20.491115 sshd[3799]: pam_unix(sshd:session): session closed for user core Sep 13 00:21:20.496881 systemd[1]: sshd@7-10.0.0.7:22-10.0.0.1:55760.service: Deactivated successfully. Sep 13 00:21:20.501150 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:21:20.502793 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:21:20.504490 systemd-logind[1449]: Removed session 8. Sep 13 00:21:21.850154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1532299858.mount: Deactivated successfully. Sep 13 00:21:24.415112 containerd[1470]: time="2025-09-13T00:21:24.415028250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:24.522422 containerd[1470]: time="2025-09-13T00:21:24.522337910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:21:24.529829 containerd[1470]: time="2025-09-13T00:21:24.529774823Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:24.532540 containerd[1470]: time="2025-09-13T00:21:24.532480876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:24.533147 containerd[1470]: time="2025-09-13T00:21:24.533109901Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 10.14946601s" Sep 13 00:21:24.533147 containerd[1470]: time="2025-09-13T00:21:24.533144657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:21:24.547912 containerd[1470]: time="2025-09-13T00:21:24.547865043Z" level=info msg="CreateContainer within sandbox \"b2f0af658290d65a3e501f471bd559edfb67f81bfb1486fe307a648ca37b5e50\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:21:24.571031 containerd[1470]: time="2025-09-13T00:21:24.570962926Z" level=info msg="CreateContainer within sandbox \"b2f0af658290d65a3e501f471bd559edfb67f81bfb1486fe307a648ca37b5e50\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"67e1c08797d3c794a92b267a8ddb812e88208ae45ef2c201e372304eea6193a0\"" Sep 13 00:21:24.571567 containerd[1470]: time="2025-09-13T00:21:24.571542239Z" level=info msg="StartContainer for \"67e1c08797d3c794a92b267a8ddb812e88208ae45ef2c201e372304eea6193a0\"" Sep 13 00:21:24.639865 systemd[1]: Started cri-containerd-67e1c08797d3c794a92b267a8ddb812e88208ae45ef2c201e372304eea6193a0.scope - libcontainer container 67e1c08797d3c794a92b267a8ddb812e88208ae45ef2c201e372304eea6193a0. Sep 13 00:21:24.951870 containerd[1470]: time="2025-09-13T00:21:24.951790393Z" level=info msg="StartContainer for \"67e1c08797d3c794a92b267a8ddb812e88208ae45ef2c201e372304eea6193a0\" returns successfully" Sep 13 00:21:24.990043 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
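The PullImage entry for calico/node gives enough to estimate transfer rate: 157078201 bytes in 10.14946601s works out to roughly 14.8 MiB/s. A sketch of the arithmetic, using only figures from that entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures taken directly from the PullImage log entry above.
	const imageBytes = 157078201
	pullTime, err := time.ParseDuration("10.14946601s")
	if err != nil {
		panic(err)
	}
	mibPerSec := float64(imageBytes) / pullTime.Seconds() / (1 << 20)
	fmt.Printf("calico/node pull: %d bytes in %s (~%.1f MiB/s)\n",
		imageBytes, pullTime, mibPerSec)
}
```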
Sep 13 00:21:24.990172 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 13 00:21:25.071022 containerd[1470]: time="2025-09-13T00:21:25.070965906Z" level=info msg="StopPodSandbox for \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\"" Sep 13 00:21:25.228945 containerd[1470]: 2025-09-13 00:21:25.135 [INFO][3877] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Sep 13 00:21:25.228945 containerd[1470]: 2025-09-13 00:21:25.135 [INFO][3877] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" iface="eth0" netns="/var/run/netns/cni-30dc6572-5a19-cf56-9a6e-76099b4fb046" Sep 13 00:21:25.228945 containerd[1470]: 2025-09-13 00:21:25.136 [INFO][3877] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" iface="eth0" netns="/var/run/netns/cni-30dc6572-5a19-cf56-9a6e-76099b4fb046" Sep 13 00:21:25.228945 containerd[1470]: 2025-09-13 00:21:25.136 [INFO][3877] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" iface="eth0" netns="/var/run/netns/cni-30dc6572-5a19-cf56-9a6e-76099b4fb046" Sep 13 00:21:25.228945 containerd[1470]: 2025-09-13 00:21:25.138 [INFO][3877] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Sep 13 00:21:25.228945 containerd[1470]: 2025-09-13 00:21:25.138 [INFO][3877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Sep 13 00:21:25.228945 containerd[1470]: 2025-09-13 00:21:25.212 [INFO][3887] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" HandleID="k8s-pod-network.849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Workload="localhost-k8s-whisker--d846d4f55--j7f97-eth0" Sep 13 00:21:25.228945 containerd[1470]: 2025-09-13 00:21:25.213 [INFO][3887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:25.228945 containerd[1470]: 2025-09-13 00:21:25.214 [INFO][3887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:21:25.228945 containerd[1470]: 2025-09-13 00:21:25.219 [WARNING][3887] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" HandleID="k8s-pod-network.849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Workload="localhost-k8s-whisker--d846d4f55--j7f97-eth0" Sep 13 00:21:25.228945 containerd[1470]: 2025-09-13 00:21:25.219 [INFO][3887] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" HandleID="k8s-pod-network.849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Workload="localhost-k8s-whisker--d846d4f55--j7f97-eth0" Sep 13 00:21:25.228945 containerd[1470]: 2025-09-13 00:21:25.221 [INFO][3887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:25.228945 containerd[1470]: 2025-09-13 00:21:25.224 [INFO][3877] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Sep 13 00:21:25.229449 containerd[1470]: time="2025-09-13T00:21:25.229232093Z" level=info msg="TearDown network for sandbox \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\" successfully" Sep 13 00:21:25.229449 containerd[1470]: time="2025-09-13T00:21:25.229262942Z" level=info msg="StopPodSandbox for \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\" returns successfully" Sep 13 00:21:25.257220 containerd[1470]: time="2025-09-13T00:21:25.257157635Z" level=info msg="StopPodSandbox for \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\"" Sep 13 00:21:25.335635 kubelet[2525]: I0913 00:21:25.335579 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a11b08e4-771c-4220-8d05-979228b63ed8-whisker-ca-bundle\") pod \"a11b08e4-771c-4220-8d05-979228b63ed8\" (UID: \"a11b08e4-771c-4220-8d05-979228b63ed8\") " Sep 13 00:21:25.336126 kubelet[2525]: I0913 00:21:25.335656 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh6rr\" (UniqueName: \"kubernetes.io/projected/a11b08e4-771c-4220-8d05-979228b63ed8-kube-api-access-qh6rr\") pod \"a11b08e4-771c-4220-8d05-979228b63ed8\" (UID: \"a11b08e4-771c-4220-8d05-979228b63ed8\") " Sep 13 00:21:25.336126 kubelet[2525]: I0913 00:21:25.335677 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a11b08e4-771c-4220-8d05-979228b63ed8-whisker-backend-key-pair\") pod \"a11b08e4-771c-4220-8d05-979228b63ed8\" (UID: \"a11b08e4-771c-4220-8d05-979228b63ed8\") " Sep 13 00:21:25.337544 kubelet[2525]: I0913 00:21:25.337499 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a11b08e4-771c-4220-8d05-979228b63ed8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a11b08e4-771c-4220-8d05-979228b63ed8" (UID: "a11b08e4-771c-4220-8d05-979228b63ed8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:21:25.340518 kubelet[2525]: I0913 00:21:25.340489 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a11b08e4-771c-4220-8d05-979228b63ed8-kube-api-access-qh6rr" (OuterVolumeSpecName: "kube-api-access-qh6rr") pod "a11b08e4-771c-4220-8d05-979228b63ed8" (UID: "a11b08e4-771c-4220-8d05-979228b63ed8"). InnerVolumeSpecName "kube-api-access-qh6rr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:21:25.340575 containerd[1470]: 2025-09-13 00:21:25.302 [INFO][3916] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Sep 13 00:21:25.340575 containerd[1470]: 2025-09-13 00:21:25.303 [INFO][3916] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" iface="eth0" netns="/var/run/netns/cni-30d60d5b-cbb1-75a8-9004-2b50b20452ce" Sep 13 00:21:25.340575 containerd[1470]: 2025-09-13 00:21:25.303 [INFO][3916] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" iface="eth0" netns="/var/run/netns/cni-30d60d5b-cbb1-75a8-9004-2b50b20452ce" Sep 13 00:21:25.340575 containerd[1470]: 2025-09-13 00:21:25.303 [INFO][3916] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" iface="eth0" netns="/var/run/netns/cni-30d60d5b-cbb1-75a8-9004-2b50b20452ce" Sep 13 00:21:25.340575 containerd[1470]: 2025-09-13 00:21:25.303 [INFO][3916] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Sep 13 00:21:25.340575 containerd[1470]: 2025-09-13 00:21:25.303 [INFO][3916] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Sep 13 00:21:25.340575 containerd[1470]: 2025-09-13 00:21:25.325 [INFO][3927] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" HandleID="k8s-pod-network.2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Workload="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" Sep 13 00:21:25.340575 containerd[1470]: 2025-09-13 00:21:25.325 [INFO][3927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:25.340575 containerd[1470]: 2025-09-13 00:21:25.325 [INFO][3927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:21:25.340575 containerd[1470]: 2025-09-13 00:21:25.331 [WARNING][3927] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" HandleID="k8s-pod-network.2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Workload="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" Sep 13 00:21:25.340575 containerd[1470]: 2025-09-13 00:21:25.331 [INFO][3927] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" HandleID="k8s-pod-network.2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Workload="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" Sep 13 00:21:25.340575 containerd[1470]: 2025-09-13 00:21:25.333 [INFO][3927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:25.340575 containerd[1470]: 2025-09-13 00:21:25.336 [INFO][3916] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Sep 13 00:21:25.341035 containerd[1470]: time="2025-09-13T00:21:25.340736386Z" level=info msg="TearDown network for sandbox \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\" successfully" Sep 13 00:21:25.341035 containerd[1470]: time="2025-09-13T00:21:25.340777634Z" level=info msg="StopPodSandbox for \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\" returns successfully" Sep 13 00:21:25.341287 kubelet[2525]: I0913 00:21:25.340769 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a11b08e4-771c-4220-8d05-979228b63ed8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a11b08e4-771c-4220-8d05-979228b63ed8" (UID: "a11b08e4-771c-4220-8d05-979228b63ed8"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:21:25.341287 kubelet[2525]: E0913 00:21:25.341104 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:21:25.341556 containerd[1470]: time="2025-09-13T00:21:25.341523019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w2qff,Uid:c31b50ac-aa0d-44d3-b1cb-a5fd228e8908,Namespace:kube-system,Attempt:1,}" Sep 13 00:21:25.436957 systemd[1]: Removed slice kubepods-besteffort-poda11b08e4_771c_4220_8d05_979228b63ed8.slice - libcontainer container kubepods-besteffort-poda11b08e4_771c_4220_8d05_979228b63ed8.slice. Sep 13 00:21:25.437595 kubelet[2525]: I0913 00:21:25.437315 2525 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qh6rr\" (UniqueName: \"kubernetes.io/projected/a11b08e4-771c-4220-8d05-979228b63ed8-kube-api-access-qh6rr\") on node \"localhost\" DevicePath \"\"" Sep 13 00:21:25.437682 kubelet[2525]: I0913 00:21:25.437640 2525 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a11b08e4-771c-4220-8d05-979228b63ed8-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 00:21:25.437682 kubelet[2525]: I0913 00:21:25.437654 2525 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a11b08e4-771c-4220-8d05-979228b63ed8-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 00:21:25.448698 kubelet[2525]: I0913 00:21:25.447256 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-w28tn" podStartSLOduration=3.178197548 podStartE2EDuration="24.447227497s" podCreationTimestamp="2025-09-13 00:21:01 +0000 UTC" firstStartedPulling="2025-09-13 00:21:03.265061072 +0000 UTC m=+22.118502753" lastFinishedPulling="2025-09-13 00:21:24.534091021 +0000 UTC m=+43.387532702" observedRunningTime="2025-09-13 00:21:25.446873108 +0000 UTC m=+44.300314809" watchObservedRunningTime="2025-09-13 00:21:25.447227497 +0000 UTC m=+44.300669178" Sep 13 00:21:25.492713 systemd-networkd[1399]: cali08b2bfe43aa: Link UP Sep 13 00:21:25.493509 systemd-networkd[1399]: cali08b2bfe43aa: Gained carrier Sep 13 00:21:25.514137 systemd[1]: Started sshd@8-10.0.0.7:22-10.0.0.1:55772.service - OpenSSH per-connection server daemon (10.0.0.1:55772). Sep 13 00:21:25.524031 systemd[1]: Created slice kubepods-besteffort-pod7b34adde_0038_41f7_9291_bf9500f3598c.slice - libcontainer container kubepods-besteffort-pod7b34adde_0038_41f7_9291_bf9500f3598c.slice. 
Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.385 [INFO][3937] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.399 [INFO][3937] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--w2qff-eth0 coredns-674b8bbfcf- kube-system c31b50ac-aa0d-44d3-b1cb-a5fd228e8908 967 0 2025-09-13 00:20:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-w2qff eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali08b2bfe43aa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" Namespace="kube-system" Pod="coredns-674b8bbfcf-w2qff" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w2qff-" Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.399 [INFO][3937] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" Namespace="kube-system" Pod="coredns-674b8bbfcf-w2qff" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.432 [INFO][3950] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" HandleID="k8s-pod-network.3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" Workload="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.432 [INFO][3950] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" HandleID="k8s-pod-network.3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" Workload="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011be70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-w2qff", "timestamp":"2025-09-13 00:21:25.432661123 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.432 [INFO][3950] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.432 [INFO][3950] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.433 [INFO][3950] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.443 [INFO][3950] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" host="localhost" Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.450 [INFO][3950] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.456 [INFO][3950] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.458 [INFO][3950] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.461 [INFO][3950] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.461 [INFO][3950] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" host="localhost" Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.463 [INFO][3950] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.467 [INFO][3950] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" host="localhost" Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.474 [INFO][3950] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" host="localhost" Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.474 [INFO][3950] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" host="localhost" Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.474 [INFO][3950] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
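The IPAM sequence above follows the usual path: look up the host's affine block 192.168.88.128/26, confirm the affinity, then claim the first free address, 192.168.88.129. A quick check with Go's net/netip that the claimed address sits inside the block (a /26 spans 64 addresses, .128 through .191):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The affine block and the address IPAM claimed, from the entries above.
	block := netip.MustParsePrefix("192.168.88.128/26")
	addr := netip.MustParseAddr("192.168.88.129")

	fmt.Println("block contains claimed addr:", block.Contains(addr)) // true
	fmt.Println("addresses in block:", 1<<(32-block.Bits()))          // 64
}
```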
Sep 13 00:21:25.529184 containerd[1470]: 2025-09-13 00:21:25.475 [INFO][3950] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" HandleID="k8s-pod-network.3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" Workload="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" Sep 13 00:21:25.530697 containerd[1470]: 2025-09-13 00:21:25.479 [INFO][3937] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" Namespace="kube-system" Pod="coredns-674b8bbfcf-w2qff" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--w2qff-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c31b50ac-aa0d-44d3-b1cb-a5fd228e8908", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 20, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-w2qff", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08b2bfe43aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:25.530697 containerd[1470]: 2025-09-13 00:21:25.479 [INFO][3937] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" Namespace="kube-system" Pod="coredns-674b8bbfcf-w2qff" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" Sep 13 00:21:25.530697 containerd[1470]: 2025-09-13 00:21:25.479 [INFO][3937] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08b2bfe43aa ContainerID="3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" Namespace="kube-system" Pod="coredns-674b8bbfcf-w2qff" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" Sep 13 00:21:25.530697 containerd[1470]: 2025-09-13 00:21:25.493 [INFO][3937] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" Namespace="kube-system" Pod="coredns-674b8bbfcf-w2qff" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" Sep 13 00:21:25.530697 
containerd[1470]: 2025-09-13 00:21:25.495 [INFO][3937] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" Namespace="kube-system" Pod="coredns-674b8bbfcf-w2qff" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--w2qff-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c31b50ac-aa0d-44d3-b1cb-a5fd228e8908", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 20, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d", Pod:"coredns-674b8bbfcf-w2qff", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08b2bfe43aa", MAC:"4a:ed:7a:e2:0c:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:25.530697 containerd[1470]: 2025-09-13 00:21:25.524 [INFO][3937] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d" Namespace="kube-system" Pod="coredns-674b8bbfcf-w2qff" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" Sep 13 00:21:25.538887 kubelet[2525]: I0913 00:21:25.538734 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7b34adde-0038-41f7-9291-bf9500f3598c-whisker-backend-key-pair\") pod \"whisker-6c95db994-qjbpw\" (UID: \"7b34adde-0038-41f7-9291-bf9500f3598c\") " pod="calico-system/whisker-6c95db994-qjbpw" Sep 13 00:21:25.538887 kubelet[2525]: I0913 00:21:25.538802 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b34adde-0038-41f7-9291-bf9500f3598c-whisker-ca-bundle\") pod \"whisker-6c95db994-qjbpw\" (UID: \"7b34adde-0038-41f7-9291-bf9500f3598c\") " pod="calico-system/whisker-6c95db994-qjbpw" Sep 13 00:21:25.538887 kubelet[2525]: I0913 00:21:25.538822 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2ts7\" (UniqueName: 
\"kubernetes.io/projected/7b34adde-0038-41f7-9291-bf9500f3598c-kube-api-access-g2ts7\") pod \"whisker-6c95db994-qjbpw\" (UID: \"7b34adde-0038-41f7-9291-bf9500f3598c\") " pod="calico-system/whisker-6c95db994-qjbpw" Sep 13 00:21:25.546201 systemd[1]: run-netns-cni\x2d30dc6572\x2d5a19\x2dcf56\x2d9a6e\x2d76099b4fb046.mount: Deactivated successfully. Sep 13 00:21:25.546342 systemd[1]: run-netns-cni\x2d30d60d5b\x2dcbb1\x2d75a8\x2d9004\x2d2b50b20452ce.mount: Deactivated successfully. Sep 13 00:21:25.546476 systemd[1]: var-lib-kubelet-pods-a11b08e4\x2d771c\x2d4220\x2d8d05\x2d979228b63ed8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqh6rr.mount: Deactivated successfully. Sep 13 00:21:25.546599 systemd[1]: var-lib-kubelet-pods-a11b08e4\x2d771c\x2d4220\x2d8d05\x2d979228b63ed8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:21:25.566593 sshd[3961]: Accepted publickey for core from 10.0.0.1 port 55772 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU Sep 13 00:21:25.567742 containerd[1470]: time="2025-09-13T00:21:25.567631688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:21:25.567742 containerd[1470]: time="2025-09-13T00:21:25.567690850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:21:25.567742 containerd[1470]: time="2025-09-13T00:21:25.567707181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:21:25.567915 containerd[1470]: time="2025-09-13T00:21:25.567798673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:21:25.569232 sshd[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:21:25.580379 systemd-logind[1449]: New session 9 of user core. Sep 13 00:21:25.585848 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:21:25.592069 systemd[1]: Started cri-containerd-3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d.scope - libcontainer container 3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d. 
Sep 13 00:21:25.607436 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:21:25.641796 containerd[1470]: time="2025-09-13T00:21:25.641736166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w2qff,Uid:c31b50ac-aa0d-44d3-b1cb-a5fd228e8908,Namespace:kube-system,Attempt:1,} returns sandbox id \"3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d\"" Sep 13 00:21:25.643241 kubelet[2525]: E0913 00:21:25.642942 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:21:25.656653 containerd[1470]: time="2025-09-13T00:21:25.656563902Z" level=info msg="CreateContainer within sandbox \"3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:21:25.685846 containerd[1470]: time="2025-09-13T00:21:25.685685779Z" level=info msg="CreateContainer within sandbox \"3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9ee8674d4fecb51827fbcfa2d701b694558ad53aabff898057518cba78a87962\"" Sep 13 00:21:25.687853 containerd[1470]: time="2025-09-13T00:21:25.686790372Z" level=info msg="StartContainer for \"9ee8674d4fecb51827fbcfa2d701b694558ad53aabff898057518cba78a87962\"" Sep 13 00:21:25.723146 systemd[1]: Started cri-containerd-9ee8674d4fecb51827fbcfa2d701b694558ad53aabff898057518cba78a87962.scope - libcontainer container 9ee8674d4fecb51827fbcfa2d701b694558ad53aabff898057518cba78a87962. Sep 13 00:21:25.745844 sshd[3961]: pam_unix(sshd:session): session closed for user core Sep 13 00:21:25.749355 systemd[1]: sshd@8-10.0.0.7:22-10.0.0.1:55772.service: Deactivated successfully. Sep 13 00:21:25.752363 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:21:25.754297 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:21:25.756233 systemd-logind[1449]: Removed session 9. 
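The kubelet `dns.go:153` warning above reflects the classic glibc resolver cap of three `nameserver` entries (MAXNS): kubelet trims the merged list and logs what was actually applied, which is why only `1.1.1.1 1.0.0.1 8.8.8.8` survive. A minimal sketch of that trimming, assuming the limit of 3 and a pre-parsed server list (the function and the fourth server are illustrative, not kubelet's code):

```go
package main

import "fmt"

const maxNameservers = 3 // glibc MAXNS; kubelet enforces the same cap

// trimNameservers keeps the first maxNameservers entries and reports
// whether anything was dropped, mirroring the logged behavior.
func trimNameservers(servers []string) ([]string, bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	merged := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"} // 4th entry is hypothetical
	kept, truncated := trimNameservers(merged)
	if truncated {
		fmt.Printf("Nameserver limits exceeded, applied nameserver line: %v\n", kept)
	}
}
```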
Sep 13 00:21:25.760376 containerd[1470]: time="2025-09-13T00:21:25.760334102Z" level=info msg="StartContainer for \"9ee8674d4fecb51827fbcfa2d701b694558ad53aabff898057518cba78a87962\" returns successfully" Sep 13 00:21:25.828791 containerd[1470]: time="2025-09-13T00:21:25.828713978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c95db994-qjbpw,Uid:7b34adde-0038-41f7-9291-bf9500f3598c,Namespace:calico-system,Attempt:0,}" Sep 13 00:21:25.953253 systemd-networkd[1399]: calid868ba28c38: Link UP Sep 13 00:21:25.953793 systemd-networkd[1399]: calid868ba28c38: Gained carrier Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.873 [INFO][4061] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.886 [INFO][4061] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6c95db994--qjbpw-eth0 whisker-6c95db994- calico-system 7b34adde-0038-41f7-9291-bf9500f3598c 986 0 2025-09-13 00:21:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c95db994 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6c95db994-qjbpw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid868ba28c38 [] [] }} ContainerID="60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" Namespace="calico-system" Pod="whisker-6c95db994-qjbpw" WorkloadEndpoint="localhost-k8s-whisker--6c95db994--qjbpw-" Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.886 [INFO][4061] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" Namespace="calico-system" Pod="whisker-6c95db994-qjbpw" WorkloadEndpoint="localhost-k8s-whisker--6c95db994--qjbpw-eth0" Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.914 [INFO][4077] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" HandleID="k8s-pod-network.60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" Workload="localhost-k8s-whisker--6c95db994--qjbpw-eth0" Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.914 [INFO][4077] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" HandleID="k8s-pod-network.60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" Workload="localhost-k8s-whisker--6c95db994--qjbpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00012d480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6c95db994-qjbpw", "timestamp":"2025-09-13 00:21:25.914305763 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.914 [INFO][4077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.914 [INFO][4077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.914 [INFO][4077] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.922 [INFO][4077] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" host="localhost" Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.926 [INFO][4077] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.932 [INFO][4077] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.933 [INFO][4077] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.935 [INFO][4077] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.935 [INFO][4077] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" host="localhost" Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.936 [INFO][4077] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2 Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.941 [INFO][4077] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" host="localhost" Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.947 [INFO][4077] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" host="localhost" Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.947 [INFO][4077] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" host="localhost" Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.947 [INFO][4077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
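The IPAM sequence above is Calico's block-affinity allocator end to end: acquire the host-wide lock, confirm this host's affinity for the 192.168.88.128/26 block, pick a free address from it, then write the block back to the datastore to claim the IP. A compact sketch of the "assign 1 address from block" step, assuming a /26 block with an in-memory set of used ordinals (real Calico persists blocks and handles in the datastore; ordinal 0 is shown as taken because the node's tunnel device often holds the first address of its block):

```go
package main

import (
	"fmt"
	"net/netip"
)

// block models one Calico IPAM block: a CIDR plus allocated ordinals.
type block struct {
	cidr netip.Prefix // e.g. 192.168.88.128/26
	used map[int]bool // ordinal -> allocated
}

// assign claims the lowest free ordinal and returns its address,
// or false when the block is full and the caller must claim a new block.
func (b *block) assign() (netip.Addr, bool) {
	size := 1 << (32 - b.cidr.Bits()) // 64 addresses in a /26
	addr := b.cidr.Addr()
	for ord := 0; ord < size; ord++ {
		if !b.used[ord] {
			b.used[ord] = true
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.88.128/26"),
		used: map[int]bool{0: true}, // .128 already held on this node
	}
	ip, _ := b.assign()
	fmt.Println(ip) // 192.168.88.129 -- the next free ordinal, as in the log
}
```

The host-wide lock in the log serializes exactly this read-modify-write on the block, which is why every pod's allocation logs the acquire/release pair.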
Sep 13 00:21:25.969329 containerd[1470]: 2025-09-13 00:21:25.947 [INFO][4077] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" HandleID="k8s-pod-network.60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" Workload="localhost-k8s-whisker--6c95db994--qjbpw-eth0" Sep 13 00:21:25.969965 containerd[1470]: 2025-09-13 00:21:25.951 [INFO][4061] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" Namespace="calico-system" Pod="whisker-6c95db994-qjbpw" WorkloadEndpoint="localhost-k8s-whisker--6c95db994--qjbpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c95db994--qjbpw-eth0", GenerateName:"whisker-6c95db994-", Namespace:"calico-system", SelfLink:"", UID:"7b34adde-0038-41f7-9291-bf9500f3598c", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c95db994", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6c95db994-qjbpw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid868ba28c38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:25.969965 containerd[1470]: 2025-09-13 00:21:25.951 [INFO][4061] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" Namespace="calico-system" Pod="whisker-6c95db994-qjbpw" WorkloadEndpoint="localhost-k8s-whisker--6c95db994--qjbpw-eth0" Sep 13 00:21:25.969965 containerd[1470]: 2025-09-13 00:21:25.951 [INFO][4061] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid868ba28c38 ContainerID="60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" Namespace="calico-system" Pod="whisker-6c95db994-qjbpw" WorkloadEndpoint="localhost-k8s-whisker--6c95db994--qjbpw-eth0" Sep 13 00:21:25.969965 containerd[1470]: 2025-09-13 00:21:25.953 [INFO][4061] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" Namespace="calico-system" Pod="whisker-6c95db994-qjbpw" WorkloadEndpoint="localhost-k8s-whisker--6c95db994--qjbpw-eth0" Sep 13 00:21:25.969965 containerd[1470]: 2025-09-13 00:21:25.954 [INFO][4061] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" Namespace="calico-system" Pod="whisker-6c95db994-qjbpw" WorkloadEndpoint="localhost-k8s-whisker--6c95db994--qjbpw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c95db994--qjbpw-eth0", GenerateName:"whisker-6c95db994-", Namespace:"calico-system", SelfLink:"", UID:"7b34adde-0038-41f7-9291-bf9500f3598c", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c95db994", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2", Pod:"whisker-6c95db994-qjbpw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid868ba28c38", MAC:"b6:93:ae:ca:41:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:25.969965 containerd[1470]: 2025-09-13 00:21:25.965 [INFO][4061] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2" Namespace="calico-system" Pod="whisker-6c95db994-qjbpw" WorkloadEndpoint="localhost-k8s-whisker--6c95db994--qjbpw-eth0" Sep 13 00:21:25.993951 containerd[1470]: time="2025-09-13T00:21:25.993345797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:21:25.993951 containerd[1470]: time="2025-09-13T00:21:25.993518132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:21:25.993951 containerd[1470]: time="2025-09-13T00:21:25.993560983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:21:25.993951 containerd[1470]: time="2025-09-13T00:21:25.993870576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:21:26.030935 systemd[1]: Started cri-containerd-60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2.scope - libcontainer container 60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2. 
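Each pod in this log follows the same CRI sequence: `RunPodSandbox` (which triggers the Calico CNI ADD shown above), then `CreateContainer` inside the returned sandbox ID, then `StartContainer`. A minimal client-side sketch against containerd's CRI socket, using the `k8s.io/cri-api` v1 types; the configs are elided placeholders, so treat this as an outline of the call order rather than a runnable pod spec:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI endpoint, the same socket kubelet uses.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: creates the sandbox and invokes the CNI plugin (Calico here).
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{ /* pod metadata, DNS config, ... */ },
	})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within the sandbox (also wants the sandbox config),
	// 3. StartContainer -- the "StartContainer ... returns successfully" lines.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config:       &runtimeapi.ContainerConfig{ /* image, command, mounts */ },
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: c.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}
```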
Sep 13 00:21:26.046121 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:21:26.075580 containerd[1470]: time="2025-09-13T00:21:26.075529317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c95db994-qjbpw,Uid:7b34adde-0038-41f7-9291-bf9500f3598c,Namespace:calico-system,Attempt:0,} returns sandbox id \"60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2\"" Sep 13 00:21:26.077739 containerd[1470]: time="2025-09-13T00:21:26.077472450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:21:26.257792 containerd[1470]: time="2025-09-13T00:21:26.257723081Z" level=info msg="StopPodSandbox for \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\"" Sep 13 00:21:26.395340 containerd[1470]: 2025-09-13 00:21:26.339 [INFO][4141] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Sep 13 00:21:26.395340 containerd[1470]: 2025-09-13 00:21:26.339 [INFO][4141] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" iface="eth0" netns="/var/run/netns/cni-b4a12903-e2f3-d5ae-e63b-7689d3743c98" Sep 13 00:21:26.395340 containerd[1470]: 2025-09-13 00:21:26.339 [INFO][4141] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" iface="eth0" netns="/var/run/netns/cni-b4a12903-e2f3-d5ae-e63b-7689d3743c98" Sep 13 00:21:26.395340 containerd[1470]: 2025-09-13 00:21:26.340 [INFO][4141] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" iface="eth0" netns="/var/run/netns/cni-b4a12903-e2f3-d5ae-e63b-7689d3743c98" Sep 13 00:21:26.395340 containerd[1470]: 2025-09-13 00:21:26.340 [INFO][4141] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Sep 13 00:21:26.395340 containerd[1470]: 2025-09-13 00:21:26.340 [INFO][4141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Sep 13 00:21:26.395340 containerd[1470]: 2025-09-13 00:21:26.378 [INFO][4235] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" HandleID="k8s-pod-network.2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Workload="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" Sep 13 00:21:26.395340 containerd[1470]: 2025-09-13 00:21:26.378 [INFO][4235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:26.395340 containerd[1470]: 2025-09-13 00:21:26.378 [INFO][4235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:21:26.395340 containerd[1470]: 2025-09-13 00:21:26.386 [WARNING][4235] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" HandleID="k8s-pod-network.2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Workload="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" Sep 13 00:21:26.395340 containerd[1470]: 2025-09-13 00:21:26.386 [INFO][4235] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" HandleID="k8s-pod-network.2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Workload="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" Sep 13 00:21:26.395340 containerd[1470]: 2025-09-13 00:21:26.388 [INFO][4235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:26.395340 containerd[1470]: 2025-09-13 00:21:26.391 [INFO][4141] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Sep 13 00:21:26.398310 containerd[1470]: time="2025-09-13T00:21:26.397841552Z" level=info msg="TearDown network for sandbox \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\" successfully" Sep 13 00:21:26.398310 containerd[1470]: time="2025-09-13T00:21:26.397898079Z" level=info msg="StopPodSandbox for \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\" returns successfully" Sep 13 00:21:26.399293 containerd[1470]: time="2025-09-13T00:21:26.399255438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f44cd849d-mzc4c,Uid:81467d75-5765-475d-8a7f-390251ac6b99,Namespace:calico-system,Attempt:1,}" Sep 13 00:21:26.431785 kubelet[2525]: E0913 00:21:26.431741 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:21:26.466777 kubelet[2525]: I0913 00:21:26.466700 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-w2qff" podStartSLOduration=38.466679341 podStartE2EDuration="38.466679341s" podCreationTimestamp="2025-09-13 00:20:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:21:26.450554451 +0000 UTC m=+45.303996132" watchObservedRunningTime="2025-09-13 00:21:26.466679341 +0000 UTC m=+45.320121022" Sep 13 00:21:26.550333 systemd[1]: run-netns-cni\x2db4a12903\x2de2f3\x2dd5ae\x2de63b\x2d7689d3743c98.mount: Deactivated successfully. 
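The teardown above shows why the release path is deliberately idempotent: the plugin first releases by handle ID, downgrades "address doesn't exist" to a warning it ignores, then falls back to releasing by workload ID. Kubelet retries CNI DEL, so a second teardown of the same sandbox must succeed cleanly. A sketch of that pattern, with hypothetical store functions standing in for the datastore calls:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("address not found")

// Stand-ins for the datastore release calls; here the allocation is
// already gone, as in the WARNING lines above.
func releaseByHandle(handleID string) error     { return errNotFound }
func releaseByWorkload(workloadID string) error { return errNotFound }

// releaseIPs mirrors the logged behavior: try the handle first, ignore
// "doesn't exist", then fall back to the workload ID, so repeated CNI
// DEL calls for the same pod never fail.
func releaseIPs(handleID, workloadID string) error {
	if err := releaseByHandle(handleID); err != nil {
		if !errors.Is(err, errNotFound) {
			return err
		}
		fmt.Println("WARNING: asked to release address but it doesn't exist; ignoring")
	}
	if err := releaseByWorkload(workloadID); err != nil && !errors.Is(err, errNotFound) {
		return err
	}
	return nil
}

func main() {
	fmt.Println(releaseIPs("k8s-pod-network.2a3b8a1b", "localhost-k8s-demo-eth0")) // <nil>
}
```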
Sep 13 00:21:26.613698 kernel: bpftool[4314]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:21:26.614355 systemd-networkd[1399]: calid1fa496aa3b: Link UP Sep 13 00:21:26.615528 systemd-networkd[1399]: calid1fa496aa3b: Gained carrier Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.453 [INFO][4249] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.481 [INFO][4249] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0 calico-kube-controllers-6f44cd849d- calico-system 81467d75-5765-475d-8a7f-390251ac6b99 1002 0 2025-09-13 00:21:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f44cd849d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6f44cd849d-mzc4c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid1fa496aa3b [] [] }} ContainerID="715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" Namespace="calico-system" Pod="calico-kube-controllers-6f44cd849d-mzc4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-" Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.482 [INFO][4249] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" Namespace="calico-system" Pod="calico-kube-controllers-6f44cd849d-mzc4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.543 [INFO][4269] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" HandleID="k8s-pod-network.715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" Workload="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.543 [INFO][4269] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" HandleID="k8s-pod-network.715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" Workload="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000131720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6f44cd849d-mzc4c", "timestamp":"2025-09-13 00:21:26.542978936 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.544 [INFO][4269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.544 [INFO][4269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.544 [INFO][4269] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.558 [INFO][4269] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" host="localhost" Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.566 [INFO][4269] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.575 [INFO][4269] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.587 [INFO][4269] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.591 [INFO][4269] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.591 [INFO][4269] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" host="localhost" Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.593 [INFO][4269] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.601 [INFO][4269] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" host="localhost" Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.607 [INFO][4269] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" host="localhost" Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.607 [INFO][4269] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" host="localhost" Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.607 [INFO][4269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
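The `bpftool[4314]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set` line a little earlier is the kernel 6.3+ nudge that callers of memfd_create(2) should declare up front whether the memfd may ever become executable; otherwise the kernel logs this warning once. In Go the fixed call looks roughly like this, assuming a `golang.org/x/sys/unix` version recent enough to define `MFD_NOEXEC_SEAL`:

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// MFD_NOEXEC_SEAL declares the memfd non-executable up front, which
	// silences the "called without MFD_EXEC or MFD_NOEXEC_SEAL" warning.
	fd, err := unix.MemfdCreate("demo", unix.MFD_CLOEXEC|unix.MFD_NOEXEC_SEAL)
	if err != nil {
		log.Fatal(err)
	}
	defer unix.Close(fd)
	log.Printf("memfd created: fd=%d", fd)
}
```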
Sep 13 00:21:26.633130 containerd[1470]: 2025-09-13 00:21:26.607 [INFO][4269] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" HandleID="k8s-pod-network.715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" Workload="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" Sep 13 00:21:26.634687 containerd[1470]: 2025-09-13 00:21:26.611 [INFO][4249] cni-plugin/k8s.go 418: Populated endpoint ContainerID="715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" Namespace="calico-system" Pod="calico-kube-controllers-6f44cd849d-mzc4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0", GenerateName:"calico-kube-controllers-6f44cd849d-", Namespace:"calico-system", SelfLink:"", UID:"81467d75-5765-475d-8a7f-390251ac6b99", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 21, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f44cd849d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6f44cd849d-mzc4c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid1fa496aa3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:26.634687 containerd[1470]: 2025-09-13 00:21:26.611 [INFO][4249] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" Namespace="calico-system" Pod="calico-kube-controllers-6f44cd849d-mzc4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" Sep 13 00:21:26.634687 containerd[1470]: 2025-09-13 00:21:26.611 [INFO][4249] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid1fa496aa3b ContainerID="715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" Namespace="calico-system" Pod="calico-kube-controllers-6f44cd849d-mzc4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" Sep 13 00:21:26.634687 containerd[1470]: 2025-09-13 00:21:26.614 [INFO][4249] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" Namespace="calico-system" Pod="calico-kube-controllers-6f44cd849d-mzc4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" Sep 13 00:21:26.634687 containerd[1470]: 2025-09-13 00:21:26.615 [INFO][4249] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" Namespace="calico-system" Pod="calico-kube-controllers-6f44cd849d-mzc4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0", GenerateName:"calico-kube-controllers-6f44cd849d-", Namespace:"calico-system", SelfLink:"", UID:"81467d75-5765-475d-8a7f-390251ac6b99", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 21, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f44cd849d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff", Pod:"calico-kube-controllers-6f44cd849d-mzc4c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid1fa496aa3b", MAC:"e6:f3:6a:8b:96:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:26.634687 containerd[1470]: 2025-09-13 00:21:26.629 [INFO][4249] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff" Namespace="calico-system" Pod="calico-kube-controllers-6f44cd849d-mzc4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" Sep 13 00:21:26.663299 containerd[1470]: time="2025-09-13T00:21:26.662896493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:21:26.663299 containerd[1470]: time="2025-09-13T00:21:26.663015859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:21:26.663299 containerd[1470]: time="2025-09-13T00:21:26.663031528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:21:26.663299 containerd[1470]: time="2025-09-13T00:21:26.663152626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:21:26.694960 systemd[1]: Started cri-containerd-715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff.scope - libcontainer container 715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff. 
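The host-side interface names in these records (cali08b2bfe43aa, calid868ba28c38, calid1fa496aa3b) are deterministic: Calico hashes the workload endpoint identity and keeps just enough hex digits so that, after the `cali` prefix, the name fits the Linux 15-character interface-name limit. A sketch modeled on the cni-plugin's helper; the exact hash input and prefix are configurable in real deployments, so do not expect it to reproduce the names above byte for byte:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethNameForWorkload derives a stable host-side veth name:
// "cali" + first 11 hex chars of SHA-1(namespace.pod) = 15 bytes,
// the maximum interface name length (IFNAMSIZ - 1).
func vethNameForWorkload(namespace, pod string) string {
	sum := sha1.Sum([]byte(fmt.Sprintf("%s.%s", namespace, pod)))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethNameForWorkload("kube-system", "coredns-674b8bbfcf-w2qff"))
}
```

Determinism matters here: on a repeated CNI ADD for the same pod, the plugin recomputes the same name instead of leaking a second veth.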
Sep 13 00:21:26.712808 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:21:26.746735 containerd[1470]: time="2025-09-13T00:21:26.746651494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f44cd849d-mzc4c,Uid:81467d75-5765-475d-8a7f-390251ac6b99,Namespace:calico-system,Attempt:1,} returns sandbox id \"715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff\"" Sep 13 00:21:26.912213 systemd-networkd[1399]: vxlan.calico: Link UP Sep 13 00:21:26.912229 systemd-networkd[1399]: vxlan.calico: Gained carrier Sep 13 00:21:27.238740 systemd-networkd[1399]: cali08b2bfe43aa: Gained IPv6LL Sep 13 00:21:27.259895 containerd[1470]: time="2025-09-13T00:21:27.259841322Z" level=info msg="StopPodSandbox for \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\"" Sep 13 00:21:27.263066 kubelet[2525]: I0913 00:21:27.262699 2525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a11b08e4-771c-4220-8d05-979228b63ed8" path="/var/lib/kubelet/pods/a11b08e4-771c-4220-8d05-979228b63ed8/volumes" Sep 13 00:21:27.353679 containerd[1470]: 2025-09-13 00:21:27.309 [INFO][4443] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Sep 13 00:21:27.353679 containerd[1470]: 2025-09-13 00:21:27.310 [INFO][4443] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" iface="eth0" netns="/var/run/netns/cni-0a9ab8c6-9261-c5d7-8caf-e1f20f906f16" Sep 13 00:21:27.353679 containerd[1470]: 2025-09-13 00:21:27.310 [INFO][4443] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" iface="eth0" netns="/var/run/netns/cni-0a9ab8c6-9261-c5d7-8caf-e1f20f906f16" Sep 13 00:21:27.353679 containerd[1470]: 2025-09-13 00:21:27.310 [INFO][4443] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" iface="eth0" netns="/var/run/netns/cni-0a9ab8c6-9261-c5d7-8caf-e1f20f906f16" Sep 13 00:21:27.353679 containerd[1470]: 2025-09-13 00:21:27.310 [INFO][4443] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Sep 13 00:21:27.353679 containerd[1470]: 2025-09-13 00:21:27.310 [INFO][4443] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Sep 13 00:21:27.353679 containerd[1470]: 2025-09-13 00:21:27.337 [INFO][4454] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" HandleID="k8s-pod-network.43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Workload="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" Sep 13 00:21:27.353679 containerd[1470]: 2025-09-13 00:21:27.337 [INFO][4454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:27.353679 containerd[1470]: 2025-09-13 00:21:27.337 [INFO][4454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:21:27.353679 containerd[1470]: 2025-09-13 00:21:27.345 [WARNING][4454] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" HandleID="k8s-pod-network.43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Workload="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" Sep 13 00:21:27.353679 containerd[1470]: 2025-09-13 00:21:27.345 [INFO][4454] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" HandleID="k8s-pod-network.43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Workload="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" Sep 13 00:21:27.353679 containerd[1470]: 2025-09-13 00:21:27.346 [INFO][4454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:27.353679 containerd[1470]: 2025-09-13 00:21:27.350 [INFO][4443] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Sep 13 00:21:27.355128 containerd[1470]: time="2025-09-13T00:21:27.353950952Z" level=info msg="TearDown network for sandbox \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\" successfully" Sep 13 00:21:27.355128 containerd[1470]: time="2025-09-13T00:21:27.353993181Z" level=info msg="StopPodSandbox for \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\" returns successfully" Sep 13 00:21:27.355200 kubelet[2525]: E0913 00:21:27.354376 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:21:27.355251 containerd[1470]: time="2025-09-13T00:21:27.355132078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f4k9l,Uid:4eabb29c-8307-4ea2-a6d0-81142535e33a,Namespace:kube-system,Attempt:1,}" Sep 13 00:21:27.357903 systemd[1]: run-netns-cni\x2d0a9ab8c6\x2d9261\x2dc5d7\x2d8caf\x2de1f20f906f16.mount: Deactivated successfully. 
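The `run-netns-cni\x2d....mount: Deactivated successfully` lines pair with the teardown above: each CNI network namespace is an nsfs file bind-mounted under /var/run/netns, so deleting it means unmount plus unlink, and systemd observes the mount going away. A minimal sketch of that cleanup step, assuming the path layout shown in the logs (real plugins use the github.com/containernetworking/plugins ns helpers):

```go
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

// deleteNetns detaches the nsfs bind mount and removes the pinning file,
// which is what the "Deactivated successfully" mount lines reflect.
func deleteNetns(path string) error {
	// MNT_DETACH lazily unmounts even if a process still holds the ns open;
	// EINVAL means it was not mounted (already cleaned up), so fall through.
	if err := unix.Unmount(path, unix.MNT_DETACH); err != nil && err != unix.EINVAL {
		return err
	}
	return os.Remove(path)
}

func main() {
	if err := deleteNetns("/var/run/netns/cni-0a9ab8c6-9261-c5d7-8caf-e1f20f906f16"); err != nil {
		log.Fatal(err)
	}
}
```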
Sep 13 00:21:27.434342 kubelet[2525]: E0913 00:21:27.434310 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:21:27.648139 systemd-networkd[1399]: cali9320205bf87: Link UP Sep 13 00:21:27.648903 systemd-networkd[1399]: cali9320205bf87: Gained carrier Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.580 [INFO][4462] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0 coredns-674b8bbfcf- kube-system 4eabb29c-8307-4ea2-a6d0-81142535e33a 1023 0 2025-09-13 00:20:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-f4k9l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9320205bf87 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" Namespace="kube-system" Pod="coredns-674b8bbfcf-f4k9l" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f4k9l-" Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.580 [INFO][4462] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" Namespace="kube-system" Pod="coredns-674b8bbfcf-f4k9l" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.604 [INFO][4476] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" HandleID="k8s-pod-network.1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" Workload="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.605 [INFO][4476] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" HandleID="k8s-pod-network.1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" Workload="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-f4k9l", "timestamp":"2025-09-13 00:21:27.604952374 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.605 [INFO][4476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.605 [INFO][4476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.605 [INFO][4476] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.612 [INFO][4476] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" host="localhost" Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.616 [INFO][4476] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.620 [INFO][4476] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.621 [INFO][4476] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.623 [INFO][4476] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.624 [INFO][4476] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" host="localhost" Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.625 [INFO][4476] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4 Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.637 [INFO][4476] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" host="localhost" Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.642 [INFO][4476] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" host="localhost" Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.642 [INFO][4476] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" host="localhost" Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.642 [INFO][4476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
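One reading aid for the WorkloadEndpoint dumps here and earlier: they are Go `%#v`-style struct prints, so the port numbers appear in hex. `Port:0x35` is 53 (the dns and dns-tcp ports) and `Port:0x23c1` is 9153 (the CoreDNS metrics port). A trivial check:

```go
package main

import "fmt"

func main() {
	// The struct dumps in the CNI log print numbers in hex.
	fmt.Println(0x35)   // 53   -> dns and dns-tcp
	fmt.Println(0x23c1) // 9153 -> coredns metrics
}
```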
Sep 13 00:21:27.669912 containerd[1470]: 2025-09-13 00:21:27.642 [INFO][4476] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" HandleID="k8s-pod-network.1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" Workload="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" Sep 13 00:21:27.672425 containerd[1470]: 2025-09-13 00:21:27.646 [INFO][4462] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" Namespace="kube-system" Pod="coredns-674b8bbfcf-f4k9l" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4eabb29c-8307-4ea2-a6d0-81142535e33a", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 20, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-f4k9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9320205bf87", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:27.672425 containerd[1470]: 2025-09-13 00:21:27.646 [INFO][4462] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" Namespace="kube-system" Pod="coredns-674b8bbfcf-f4k9l" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" Sep 13 00:21:27.672425 containerd[1470]: 2025-09-13 00:21:27.646 [INFO][4462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9320205bf87 ContainerID="1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" Namespace="kube-system" Pod="coredns-674b8bbfcf-f4k9l" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" Sep 13 00:21:27.672425 containerd[1470]: 2025-09-13 00:21:27.648 [INFO][4462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" Namespace="kube-system" Pod="coredns-674b8bbfcf-f4k9l" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" Sep 13 00:21:27.672425 
containerd[1470]: 2025-09-13 00:21:27.651 [INFO][4462] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" Namespace="kube-system" Pod="coredns-674b8bbfcf-f4k9l" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4eabb29c-8307-4ea2-a6d0-81142535e33a", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 20, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4", Pod:"coredns-674b8bbfcf-f4k9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9320205bf87", MAC:"5e:2a:73:8a:93:91", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:27.672425 containerd[1470]: 2025-09-13 00:21:27.665 [INFO][4462] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4" Namespace="kube-system" Pod="coredns-674b8bbfcf-f4k9l" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" Sep 13 00:21:27.700517 containerd[1470]: time="2025-09-13T00:21:27.700294947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:21:27.701159 containerd[1470]: time="2025-09-13T00:21:27.701035584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:21:27.701159 containerd[1470]: time="2025-09-13T00:21:27.701121545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:21:27.701593 containerd[1470]: time="2025-09-13T00:21:27.701560253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:21:27.727909 systemd[1]: Started cri-containerd-1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4.scope - libcontainer container 1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4.
Sep 13 00:21:27.743440 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 00:21:27.751787 systemd-networkd[1399]: calid868ba28c38: Gained IPv6LL
Sep 13 00:21:27.771855 containerd[1470]: time="2025-09-13T00:21:27.771812658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f4k9l,Uid:4eabb29c-8307-4ea2-a6d0-81142535e33a,Namespace:kube-system,Attempt:1,} returns sandbox id \"1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4\""
Sep 13 00:21:27.772553 kubelet[2525]: E0913 00:21:27.772518 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:21:27.785787 containerd[1470]: time="2025-09-13T00:21:27.785737148Z" level=info msg="CreateContainer within sandbox \"1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 00:21:27.793526 containerd[1470]: time="2025-09-13T00:21:27.793491356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:21:27.795584 containerd[1470]: time="2025-09-13T00:21:27.795538365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291"
Sep 13 00:21:27.796930 containerd[1470]: time="2025-09-13T00:21:27.796895533Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:21:27.800037 containerd[1470]: time="2025-09-13T00:21:27.800008721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:21:27.800820 containerd[1470]: time="2025-09-13T00:21:27.800685888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.723175748s"
Sep 13 00:21:27.800820 containerd[1470]: time="2025-09-13T00:21:27.800727015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\""
Sep 13 00:21:27.804382 containerd[1470]: time="2025-09-13T00:21:27.804274674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 13 00:21:27.810420 containerd[1470]: time="2025-09-13T00:21:27.810362821Z" level=info msg="CreateContainer within sandbox \"1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c781b3efd214a26c07b5bef0cb3b301d89c6723ac58595afc3c5c5061870c80f\""
Sep 13 00:21:27.810961 containerd[1470]: time="2025-09-13T00:21:27.810926553Z" level=info msg="StartContainer for \"c781b3efd214a26c07b5bef0cb3b301d89c6723ac58595afc3c5c5061870c80f\""
Sep 13 00:21:27.812296 containerd[1470]: time="2025-09-13T00:21:27.812252332Z" level=info msg="CreateContainer within sandbox \"60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 13 00:21:27.829825 containerd[1470]: time="2025-09-13T00:21:27.829772400Z" level=info msg="CreateContainer within sandbox \"60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"278d2fd4b2f857c6e71b3c503fbd840990e5503f6f88c279283b5cfeeace0dc6\""
Sep 13 00:21:27.830558 containerd[1470]: time="2025-09-13T00:21:27.830478241Z" level=info msg="StartContainer for \"278d2fd4b2f857c6e71b3c503fbd840990e5503f6f88c279283b5cfeeace0dc6\""
Sep 13 00:21:27.844853 systemd[1]: Started cri-containerd-c781b3efd214a26c07b5bef0cb3b301d89c6723ac58595afc3c5c5061870c80f.scope - libcontainer container c781b3efd214a26c07b5bef0cb3b301d89c6723ac58595afc3c5c5061870c80f.
Sep 13 00:21:27.867875 systemd[1]: Started cri-containerd-278d2fd4b2f857c6e71b3c503fbd840990e5503f6f88c279283b5cfeeace0dc6.scope - libcontainer container 278d2fd4b2f857c6e71b3c503fbd840990e5503f6f88c279283b5cfeeace0dc6.
Sep 13 00:21:27.932790 containerd[1470]: time="2025-09-13T00:21:27.931048690Z" level=info msg="StartContainer for \"c781b3efd214a26c07b5bef0cb3b301d89c6723ac58595afc3c5c5061870c80f\" returns successfully"
Sep 13 00:21:27.940556 containerd[1470]: time="2025-09-13T00:21:27.940499827Z" level=info msg="StartContainer for \"278d2fd4b2f857c6e71b3c503fbd840990e5503f6f88c279283b5cfeeace0dc6\" returns successfully"
Sep 13 00:21:28.198846 systemd-networkd[1399]: calid1fa496aa3b: Gained IPv6LL
Sep 13 00:21:28.439823 kubelet[2525]: E0913 00:21:28.439583 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:21:28.441438 kubelet[2525]: E0913 00:21:28.441399 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:21:28.617570 kubelet[2525]: I0913 00:21:28.617490 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-f4k9l" podStartSLOduration=40.617470632 podStartE2EDuration="40.617470632s" podCreationTimestamp="2025-09-13 00:20:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:21:28.616281611 +0000 UTC m=+47.469723292" watchObservedRunningTime="2025-09-13 00:21:28.617470632 +0000 UTC m=+47.470912303"
Sep 13 00:21:28.903864 systemd-networkd[1399]: vxlan.calico: Gained IPv6LL
Sep 13 00:21:29.257589 containerd[1470]: time="2025-09-13T00:21:29.257514882Z" level=info msg="StopPodSandbox for \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\""
Sep 13 00:21:29.258361 containerd[1470]: time="2025-09-13T00:21:29.258085167Z" level=info msg="StopPodSandbox for \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\""
Sep 13 00:21:29.258452 containerd[1470]: time="2025-09-13T00:21:29.258436529Z" level=info msg="StopPodSandbox for \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\""
Sep 13 00:21:29.286921 systemd-networkd[1399]: cali9320205bf87: Gained IPv6LL
Sep 13 00:21:29.443322 kubelet[2525]: E0913 00:21:29.443285 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:21:29.731219 containerd[1470]: 2025-09-13 00:21:29.657 [INFO][4655] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2"
Sep 13 00:21:29.731219 containerd[1470]: 2025-09-13 00:21:29.658 [INFO][4655] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" iface="eth0" netns="/var/run/netns/cni-2719646d-7b1d-5ac9-dbbd-4d3ea7208368"
Sep 13 00:21:29.731219 containerd[1470]: 2025-09-13 00:21:29.658 [INFO][4655] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" iface="eth0" netns="/var/run/netns/cni-2719646d-7b1d-5ac9-dbbd-4d3ea7208368"
Sep 13 00:21:29.731219 containerd[1470]: 2025-09-13 00:21:29.658 [INFO][4655] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" iface="eth0" netns="/var/run/netns/cni-2719646d-7b1d-5ac9-dbbd-4d3ea7208368"
Sep 13 00:21:29.731219 containerd[1470]: 2025-09-13 00:21:29.658 [INFO][4655] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2"
Sep 13 00:21:29.731219 containerd[1470]: 2025-09-13 00:21:29.658 [INFO][4655] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2"
Sep 13 00:21:29.731219 containerd[1470]: 2025-09-13 00:21:29.699 [INFO][4679] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" HandleID="k8s-pod-network.903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Workload="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0"
Sep 13 00:21:29.731219 containerd[1470]: 2025-09-13 00:21:29.700 [INFO][4679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:21:29.731219 containerd[1470]: 2025-09-13 00:21:29.700 [INFO][4679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:21:29.731219 containerd[1470]: 2025-09-13 00:21:29.721 [WARNING][4679] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" HandleID="k8s-pod-network.903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Workload="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0"
Sep 13 00:21:29.731219 containerd[1470]: 2025-09-13 00:21:29.721 [INFO][4679] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" HandleID="k8s-pod-network.903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Workload="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0"
Sep 13 00:21:29.731219 containerd[1470]: 2025-09-13 00:21:29.723 [INFO][4679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:21:29.731219 containerd[1470]: 2025-09-13 00:21:29.727 [INFO][4655] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2"
Sep 13 00:21:29.732755 containerd[1470]: time="2025-09-13T00:21:29.732575530Z" level=info msg="TearDown network for sandbox \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\" successfully"
Sep 13 00:21:29.732755 containerd[1470]: time="2025-09-13T00:21:29.732713039Z" level=info msg="StopPodSandbox for \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\" returns successfully"
Sep 13 00:21:29.733745 containerd[1470]: time="2025-09-13T00:21:29.733694820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8667466bbf-nvvbk,Uid:d5330e44-0081-46ea-b521-bfc339b36095,Namespace:calico-apiserver,Attempt:1,}"
Sep 13 00:21:29.736631 systemd[1]: run-netns-cni\x2d2719646d\x2d7b1d\x2d5ac9\x2ddbbd\x2d4d3ea7208368.mount: Deactivated successfully.
Sep 13 00:21:29.747603 containerd[1470]: 2025-09-13 00:21:29.662 [INFO][4657] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a"
Sep 13 00:21:29.747603 containerd[1470]: 2025-09-13 00:21:29.663 [INFO][4657] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" iface="eth0" netns="/var/run/netns/cni-21dd60ee-5c3a-65a5-b7d6-f05208e65824"
Sep 13 00:21:29.747603 containerd[1470]: 2025-09-13 00:21:29.665 [INFO][4657] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" iface="eth0" netns="/var/run/netns/cni-21dd60ee-5c3a-65a5-b7d6-f05208e65824"
Sep 13 00:21:29.747603 containerd[1470]: 2025-09-13 00:21:29.666 [INFO][4657] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" iface="eth0" netns="/var/run/netns/cni-21dd60ee-5c3a-65a5-b7d6-f05208e65824"
Sep 13 00:21:29.747603 containerd[1470]: 2025-09-13 00:21:29.666 [INFO][4657] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a"
Sep 13 00:21:29.747603 containerd[1470]: 2025-09-13 00:21:29.666 [INFO][4657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a"
Sep 13 00:21:29.747603 containerd[1470]: 2025-09-13 00:21:29.707 [INFO][4688] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" HandleID="k8s-pod-network.1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Workload="localhost-k8s-goldmane--54d579b49d--whpvq-eth0"
Sep 13 00:21:29.747603 containerd[1470]: 2025-09-13 00:21:29.707 [INFO][4688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:21:29.747603 containerd[1470]: 2025-09-13 00:21:29.723 [INFO][4688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:21:29.747603 containerd[1470]: 2025-09-13 00:21:29.736 [WARNING][4688] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" HandleID="k8s-pod-network.1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Workload="localhost-k8s-goldmane--54d579b49d--whpvq-eth0"
Sep 13 00:21:29.747603 containerd[1470]: 2025-09-13 00:21:29.736 [INFO][4688] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" HandleID="k8s-pod-network.1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Workload="localhost-k8s-goldmane--54d579b49d--whpvq-eth0"
Sep 13 00:21:29.747603 containerd[1470]: 2025-09-13 00:21:29.738 [INFO][4688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:21:29.747603 containerd[1470]: 2025-09-13 00:21:29.742 [INFO][4657] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a"
Sep 13 00:21:29.750397 containerd[1470]: time="2025-09-13T00:21:29.748075265Z" level=info msg="TearDown network for sandbox \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\" successfully"
Sep 13 00:21:29.750397 containerd[1470]: time="2025-09-13T00:21:29.748117365Z" level=info msg="StopPodSandbox for \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\" returns successfully"
Sep 13 00:21:29.750397 containerd[1470]: time="2025-09-13T00:21:29.749248266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-whpvq,Uid:337e4386-9d96-4f61-84bf-9854d2b2501c,Namespace:calico-system,Attempt:1,}"
Sep 13 00:21:29.751886 systemd[1]: run-netns-cni\x2d21dd60ee\x2d5c3a\x2d65a5\x2db7d6\x2df05208e65824.mount: Deactivated successfully.
Sep 13 00:21:29.762890 containerd[1470]: 2025-09-13 00:21:29.661 [INFO][4656] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde"
Sep 13 00:21:29.762890 containerd[1470]: 2025-09-13 00:21:29.661 [INFO][4656] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" iface="eth0" netns="/var/run/netns/cni-1064a329-000c-ee82-499b-5a50145576f4"
Sep 13 00:21:29.762890 containerd[1470]: 2025-09-13 00:21:29.663 [INFO][4656] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" iface="eth0" netns="/var/run/netns/cni-1064a329-000c-ee82-499b-5a50145576f4"
Sep 13 00:21:29.762890 containerd[1470]: 2025-09-13 00:21:29.663 [INFO][4656] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" iface="eth0" netns="/var/run/netns/cni-1064a329-000c-ee82-499b-5a50145576f4"
Sep 13 00:21:29.762890 containerd[1470]: 2025-09-13 00:21:29.663 [INFO][4656] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde"
Sep 13 00:21:29.762890 containerd[1470]: 2025-09-13 00:21:29.663 [INFO][4656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde"
Sep 13 00:21:29.762890 containerd[1470]: 2025-09-13 00:21:29.717 [INFO][4685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" HandleID="k8s-pod-network.0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Workload="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0"
Sep 13 00:21:29.762890 containerd[1470]: 2025-09-13 00:21:29.717 [INFO][4685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:21:29.762890 containerd[1470]: 2025-09-13 00:21:29.739 [INFO][4685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:21:29.762890 containerd[1470]: 2025-09-13 00:21:29.746 [WARNING][4685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" HandleID="k8s-pod-network.0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Workload="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0"
Sep 13 00:21:29.762890 containerd[1470]: 2025-09-13 00:21:29.746 [INFO][4685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" HandleID="k8s-pod-network.0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Workload="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0"
Sep 13 00:21:29.762890 containerd[1470]: 2025-09-13 00:21:29.750 [INFO][4685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:21:29.762890 containerd[1470]: 2025-09-13 00:21:29.759 [INFO][4656] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde"
Sep 13 00:21:29.803929 containerd[1470]: time="2025-09-13T00:21:29.763109023Z" level=info msg="TearDown network for sandbox \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\" successfully"
Sep 13 00:21:29.803929 containerd[1470]: time="2025-09-13T00:21:29.763136003Z" level=info msg="StopPodSandbox for \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\" returns successfully"
Sep 13 00:21:29.803929 containerd[1470]: time="2025-09-13T00:21:29.764022133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8667466bbf-45njh,Uid:5904f0eb-a84b-4d3d-ba8a-eed68333160b,Namespace:calico-apiserver,Attempt:1,}"
Sep 13 00:21:29.771038 systemd[1]: run-netns-cni\x2d1064a329\x2d000c\x2dee82\x2d499b\x2d5a50145576f4.mount: Deactivated successfully.
Sep 13 00:21:30.141335 systemd-networkd[1399]: cali7cca1946ddf: Link UP
Sep 13 00:21:30.141716 systemd-networkd[1399]: cali7cca1946ddf: Gained carrier
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.046 [INFO][4708] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0 calico-apiserver-8667466bbf- calico-apiserver d5330e44-0081-46ea-b521-bfc339b36095 1057 0 2025-09-13 00:20:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8667466bbf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8667466bbf-nvvbk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7cca1946ddf [] [] }} ContainerID="f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" Namespace="calico-apiserver" Pod="calico-apiserver-8667466bbf-nvvbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-"
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.046 [INFO][4708] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" Namespace="calico-apiserver" Pod="calico-apiserver-8667466bbf-nvvbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0"
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.082 [INFO][4723] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" HandleID="k8s-pod-network.f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" Workload="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0"
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.082 [INFO][4723] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" HandleID="k8s-pod-network.f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" Workload="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8667466bbf-nvvbk", "timestamp":"2025-09-13 00:21:30.08256338 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.082 [INFO][4723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.082 [INFO][4723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.082 [INFO][4723] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.092 [INFO][4723] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" host="localhost"
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.097 [INFO][4723] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.102 [INFO][4723] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.106 [INFO][4723] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.110 [INFO][4723] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.112 [INFO][4723] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" host="localhost"
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.114 [INFO][4723] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.119 [INFO][4723] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" host="localhost"
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.129 [INFO][4723] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" host="localhost"
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.129 [INFO][4723] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" host="localhost"
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.129 [INFO][4723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:21:30.162279 containerd[1470]: 2025-09-13 00:21:30.129 [INFO][4723] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" HandleID="k8s-pod-network.f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" Workload="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0"
Sep 13 00:21:30.163294 containerd[1470]: 2025-09-13 00:21:30.136 [INFO][4708] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" Namespace="calico-apiserver" Pod="calico-apiserver-8667466bbf-nvvbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0", GenerateName:"calico-apiserver-8667466bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5330e44-0081-46ea-b521-bfc339b36095", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 20, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8667466bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8667466bbf-nvvbk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cca1946ddf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:21:30.163294 containerd[1470]: 2025-09-13 00:21:30.136 [INFO][4708] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" Namespace="calico-apiserver" Pod="calico-apiserver-8667466bbf-nvvbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0"
Sep 13 00:21:30.163294 containerd[1470]: 2025-09-13 00:21:30.136 [INFO][4708] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cca1946ddf ContainerID="f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" Namespace="calico-apiserver" Pod="calico-apiserver-8667466bbf-nvvbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0"
Sep 13 00:21:30.163294 containerd[1470]: 2025-09-13 00:21:30.140 [INFO][4708] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" Namespace="calico-apiserver" Pod="calico-apiserver-8667466bbf-nvvbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0"
Sep 13 00:21:30.163294 containerd[1470]: 2025-09-13 00:21:30.141 [INFO][4708] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" Namespace="calico-apiserver" Pod="calico-apiserver-8667466bbf-nvvbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0", GenerateName:"calico-apiserver-8667466bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5330e44-0081-46ea-b521-bfc339b36095", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 20, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8667466bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233", Pod:"calico-apiserver-8667466bbf-nvvbk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cca1946ddf", MAC:"4e:e8:db:9f:1b:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:21:30.163294 containerd[1470]: 2025-09-13 00:21:30.157 [INFO][4708] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233" Namespace="calico-apiserver" Pod="calico-apiserver-8667466bbf-nvvbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0"
Sep 13 00:21:30.202981 containerd[1470]: time="2025-09-13T00:21:30.202073020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:21:30.202981 containerd[1470]: time="2025-09-13T00:21:30.202175543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:21:30.202981 containerd[1470]: time="2025-09-13T00:21:30.202190241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:21:30.202981 containerd[1470]: time="2025-09-13T00:21:30.202563173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:21:30.233882 systemd[1]: Started cri-containerd-f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233.scope - libcontainer container f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233.
Sep 13 00:21:30.251476 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 00:21:30.259735 containerd[1470]: time="2025-09-13T00:21:30.259694905Z" level=info msg="StopPodSandbox for \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\""
Sep 13 00:21:30.283555 containerd[1470]: time="2025-09-13T00:21:30.282874052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8667466bbf-nvvbk,Uid:d5330e44-0081-46ea-b521-bfc339b36095,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233\""
Sep 13 00:21:30.434165 systemd-networkd[1399]: calic93d431670f: Link UP
Sep 13 00:21:30.434678 systemd-networkd[1399]: calic93d431670f: Gained carrier
Sep 13 00:21:30.453546 kubelet[2525]: E0913 00:21:30.449182 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:21:30.469082 containerd[1470]: 2025-09-13 00:21:30.383 [INFO][4831] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3"
Sep 13 00:21:30.469082 containerd[1470]: 2025-09-13 00:21:30.383 [INFO][4831] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" iface="eth0" netns="/var/run/netns/cni-06d469bd-410b-ec15-9c4d-95da566adce2"
Sep 13 00:21:30.469082 containerd[1470]: 2025-09-13 00:21:30.384 [INFO][4831] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" iface="eth0" netns="/var/run/netns/cni-06d469bd-410b-ec15-9c4d-95da566adce2"
Sep 13 00:21:30.469082 containerd[1470]: 2025-09-13 00:21:30.385 [INFO][4831] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" iface="eth0" netns="/var/run/netns/cni-06d469bd-410b-ec15-9c4d-95da566adce2"
Sep 13 00:21:30.469082 containerd[1470]: 2025-09-13 00:21:30.386 [INFO][4831] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3"
Sep 13 00:21:30.469082 containerd[1470]: 2025-09-13 00:21:30.386 [INFO][4831] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3"
Sep 13 00:21:30.469082 containerd[1470]: 2025-09-13 00:21:30.421 [INFO][4845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" HandleID="k8s-pod-network.0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Workload="localhost-k8s-csi--node--driver--vkpzj-eth0"
Sep 13 00:21:30.469082 containerd[1470]: 2025-09-13 00:21:30.421 [INFO][4845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:21:30.469082 containerd[1470]: 2025-09-13 00:21:30.421 [INFO][4845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:21:30.469082 containerd[1470]: 2025-09-13 00:21:30.435 [WARNING][4845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" HandleID="k8s-pod-network.0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Workload="localhost-k8s-csi--node--driver--vkpzj-eth0"
Sep 13 00:21:30.469082 containerd[1470]: 2025-09-13 00:21:30.435 [INFO][4845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" HandleID="k8s-pod-network.0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Workload="localhost-k8s-csi--node--driver--vkpzj-eth0"
Sep 13 00:21:30.469082 containerd[1470]: 2025-09-13 00:21:30.441 [INFO][4845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:21:30.469082 containerd[1470]: 2025-09-13 00:21:30.459 [INFO][4831] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3"
Sep 13 00:21:30.471928 containerd[1470]: time="2025-09-13T00:21:30.469676443Z" level=info msg="TearDown network for sandbox \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\" successfully"
Sep 13 00:21:30.471928 containerd[1470]: time="2025-09-13T00:21:30.469759440Z" level=info msg="StopPodSandbox for \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\" returns successfully"
Sep 13 00:21:30.473662 containerd[1470]: time="2025-09-13T00:21:30.473009165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vkpzj,Uid:fe5fc44c-61ad-4dd3-a615-acf708addb61,Namespace:calico-system,Attempt:1,}"
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.191 [INFO][4730] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--whpvq-eth0 goldmane-54d579b49d- calico-system 337e4386-9d96-4f61-84bf-9854d2b2501c 1059 0 2025-09-13 00:21:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-whpvq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic93d431670f [] [] }} ContainerID="99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" Namespace="calico-system" Pod="goldmane-54d579b49d-whpvq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--whpvq-"
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.192 [INFO][4730] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" Namespace="calico-system" Pod="goldmane-54d579b49d-whpvq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--whpvq-eth0"
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.228 [INFO][4788] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" HandleID="k8s-pod-network.99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" Workload="localhost-k8s-goldmane--54d579b49d--whpvq-eth0"
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.228 [INFO][4788] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" HandleID="k8s-pod-network.99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" Workload="localhost-k8s-goldmane--54d579b49d--whpvq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000426d80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-whpvq", "timestamp":"2025-09-13 00:21:30.228241762 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.228 [INFO][4788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.228 [INFO][4788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.228 [INFO][4788] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.311 [INFO][4788] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" host="localhost"
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.371 [INFO][4788] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.383 [INFO][4788] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.386 [INFO][4788] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.389 [INFO][4788] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.389 [INFO][4788] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" host="localhost"
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.391 [INFO][4788] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.405 [INFO][4788] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" host="localhost"
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.417 [INFO][4788] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" host="localhost"
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.417 [INFO][4788] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" host="localhost"
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.418 [INFO][4788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:21:30.474286 containerd[1470]: 2025-09-13 00:21:30.418 [INFO][4788] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" HandleID="k8s-pod-network.99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" Workload="localhost-k8s-goldmane--54d579b49d--whpvq-eth0"
Sep 13 00:21:30.475022 containerd[1470]: 2025-09-13 00:21:30.429 [INFO][4730] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" Namespace="calico-system" Pod="goldmane-54d579b49d-whpvq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--whpvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--whpvq-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"337e4386-9d96-4f61-84bf-9854d2b2501c", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 21, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-whpvq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic93d431670f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:21:30.475022 containerd[1470]: 2025-09-13 00:21:30.429 [INFO][4730] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" Namespace="calico-system" Pod="goldmane-54d579b49d-whpvq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--whpvq-eth0"
Sep 13 00:21:30.475022 containerd[1470]: 2025-09-13 00:21:30.429 [INFO][4730] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic93d431670f ContainerID="99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" Namespace="calico-system" Pod="goldmane-54d579b49d-whpvq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--whpvq-eth0"
Sep 13 00:21:30.475022 containerd[1470]: 2025-09-13 00:21:30.437 [INFO][4730] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" Namespace="calico-system" Pod="goldmane-54d579b49d-whpvq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--whpvq-eth0"
Sep 13 00:21:30.475022 containerd[1470]: 2025-09-13 00:21:30.438 [INFO][4730] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" Namespace="calico-system" Pod="goldmane-54d579b49d-whpvq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--whpvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--whpvq-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"337e4386-9d96-4f61-84bf-9854d2b2501c", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 21, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28", Pod:"goldmane-54d579b49d-whpvq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic93d431670f", MAC:"26:ef:a5:02:69:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:21:30.475022 containerd[1470]: 2025-09-13 00:21:30.465 [INFO][4730] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28" Namespace="calico-system" Pod="goldmane-54d579b49d-whpvq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--whpvq-eth0"
Sep 13 00:21:30.524413 containerd[1470]: time="2025-09-13T00:21:30.520799499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:21:30.524413 containerd[1470]: time="2025-09-13T00:21:30.520862519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:21:30.524413 containerd[1470]: time="2025-09-13T00:21:30.520916109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:21:30.524413 containerd[1470]: time="2025-09-13T00:21:30.520999817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:21:30.555946 systemd-networkd[1399]: cali68a4c8787ce: Link UP
Sep 13 00:21:30.557248 systemd-networkd[1399]: cali68a4c8787ce: Gained carrier
Sep 13 00:21:30.561142 systemd[1]: Started cri-containerd-99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28.scope - libcontainer container 99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28.
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.195 [INFO][4738] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0 calico-apiserver-8667466bbf- calico-apiserver 5904f0eb-a84b-4d3d-ba8a-eed68333160b 1058 0 2025-09-13 00:20:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8667466bbf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8667466bbf-45njh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali68a4c8787ce [] [] }} ContainerID="aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" Namespace="calico-apiserver" Pod="calico-apiserver-8667466bbf-45njh" WorkloadEndpoint="localhost-k8s-calico--apiserver--8667466bbf--45njh-"
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.195 [INFO][4738] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" Namespace="calico-apiserver" Pod="calico-apiserver-8667466bbf-45njh" WorkloadEndpoint="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0"
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.257 [INFO][4786] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" HandleID="k8s-pod-network.aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" Workload="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0"
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.257 [INFO][4786] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" HandleID="k8s-pod-network.aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" Workload="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000198960), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8667466bbf-45njh", "timestamp":"2025-09-13 00:21:30.257281666 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.257 [INFO][4786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.439 [INFO][4786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.440 [INFO][4786] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.466 [INFO][4786] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" host="localhost"
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.475 [INFO][4786] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.484 [INFO][4786] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.493 [INFO][4786] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.498 [INFO][4786] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.499 [INFO][4786] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" host="localhost"
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.501 [INFO][4786] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.510 [INFO][4786] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" host="localhost"
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.526 [INFO][4786] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" host="localhost"
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.526 [INFO][4786] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" host="localhost"
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.526 [INFO][4786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:21:30.582076 containerd[1470]: 2025-09-13 00:21:30.526 [INFO][4786] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" HandleID="k8s-pod-network.aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" Workload="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0"
Sep 13 00:21:30.583119 containerd[1470]: 2025-09-13 00:21:30.533 [INFO][4738] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" Namespace="calico-apiserver" Pod="calico-apiserver-8667466bbf-45njh" WorkloadEndpoint="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0", GenerateName:"calico-apiserver-8667466bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"5904f0eb-a84b-4d3d-ba8a-eed68333160b", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 20, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8667466bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8667466bbf-45njh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68a4c8787ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:21:30.583119 containerd[1470]: 2025-09-13 00:21:30.533 [INFO][4738] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" Namespace="calico-apiserver" Pod="calico-apiserver-8667466bbf-45njh" WorkloadEndpoint="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0"
Sep 13 00:21:30.583119 containerd[1470]: 2025-09-13 00:21:30.533 [INFO][4738] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68a4c8787ce ContainerID="aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" Namespace="calico-apiserver" Pod="calico-apiserver-8667466bbf-45njh" WorkloadEndpoint="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0"
Sep 13 00:21:30.583119 containerd[1470]: 2025-09-13 00:21:30.552 [INFO][4738] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" Namespace="calico-apiserver" Pod="calico-apiserver-8667466bbf-45njh" WorkloadEndpoint="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0"
Sep 13 00:21:30.583119 containerd[1470]: 2025-09-13 00:21:30.558 [INFO][4738] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" Namespace="calico-apiserver" Pod="calico-apiserver-8667466bbf-45njh" WorkloadEndpoint="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0", GenerateName:"calico-apiserver-8667466bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"5904f0eb-a84b-4d3d-ba8a-eed68333160b", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 20, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8667466bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660", Pod:"calico-apiserver-8667466bbf-45njh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68a4c8787ce", MAC:"a2:a4:05:f5:ed:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:21:30.583119 containerd[1470]: 2025-09-13 00:21:30.574 [INFO][4738] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660" Namespace="calico-apiserver" Pod="calico-apiserver-8667466bbf-45njh" WorkloadEndpoint="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0"
Sep 13 00:21:30.596110 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 00:21:30.616897 containerd[1470]: time="2025-09-13T00:21:30.616369007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:21:30.616897 containerd[1470]: time="2025-09-13T00:21:30.616446874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:21:30.616897 containerd[1470]: time="2025-09-13T00:21:30.616464897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:21:30.616897 containerd[1470]: time="2025-09-13T00:21:30.616554386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:21:30.646974 systemd[1]: Started cri-containerd-aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660.scope - libcontainer container aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660.
Sep 13 00:21:30.663137 containerd[1470]: time="2025-09-13T00:21:30.662930175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-whpvq,Uid:337e4386-9d96-4f61-84bf-9854d2b2501c,Namespace:calico-system,Attempt:1,} returns sandbox id \"99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28\""
Sep 13 00:21:30.673017 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 00:21:30.705119 containerd[1470]: time="2025-09-13T00:21:30.704928612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8667466bbf-45njh,Uid:5904f0eb-a84b-4d3d-ba8a-eed68333160b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660\""
Sep 13 00:21:30.725393 systemd-networkd[1399]: cali9b82d6e5fec: Link UP
Sep 13 00:21:30.725796 systemd-networkd[1399]: cali9b82d6e5fec: Gained carrier
Sep 13 00:21:30.744991 systemd[1]: run-netns-cni\x2d06d469bd\x2d410b\x2dec15\x2d9c4d\x2d95da566adce2.mount: Deactivated successfully.
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.588 [INFO][4877] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vkpzj-eth0 csi-node-driver- calico-system fe5fc44c-61ad-4dd3-a615-acf708addb61 1075 0 2025-09-13 00:21:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-vkpzj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9b82d6e5fec [] [] }} ContainerID="67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" Namespace="calico-system" Pod="csi-node-driver-vkpzj" WorkloadEndpoint="localhost-k8s-csi--node--driver--vkpzj-"
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.594 [INFO][4877] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" Namespace="calico-system" Pod="csi-node-driver-vkpzj" WorkloadEndpoint="localhost-k8s-csi--node--driver--vkpzj-eth0"
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.633 [INFO][4924] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" HandleID="k8s-pod-network.67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" Workload="localhost-k8s-csi--node--driver--vkpzj-eth0"
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.633 [INFO][4924] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" HandleID="k8s-pod-network.67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" Workload="localhost-k8s-csi--node--driver--vkpzj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003054a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vkpzj", "timestamp":"2025-09-13 00:21:30.63345523 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.634 [INFO][4924] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.634 [INFO][4924] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.634 [INFO][4924] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.643 [INFO][4924] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" host="localhost"
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.660 [INFO][4924] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.667 [INFO][4924] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.696 [INFO][4924] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.699 [INFO][4924] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.699 [INFO][4924] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" host="localhost"
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.704 [INFO][4924] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.710 [INFO][4924] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" host="localhost"
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.718 [INFO][4924] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" host="localhost"
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.718 [INFO][4924] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" host="localhost"
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.718 [INFO][4924] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:21:30.754653 containerd[1470]: 2025-09-13 00:21:30.718 [INFO][4924] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" HandleID="k8s-pod-network.67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" Workload="localhost-k8s-csi--node--driver--vkpzj-eth0" Sep 13 00:21:30.755259 containerd[1470]: 2025-09-13 00:21:30.722 [INFO][4877] cni-plugin/k8s.go 418: Populated endpoint ContainerID="67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" Namespace="calico-system" Pod="csi-node-driver-vkpzj" WorkloadEndpoint="localhost-k8s-csi--node--driver--vkpzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vkpzj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe5fc44c-61ad-4dd3-a615-acf708addb61", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 21, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vkpzj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9b82d6e5fec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:30.755259 containerd[1470]: 2025-09-13 00:21:30.722 [INFO][4877] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" Namespace="calico-system" Pod="csi-node-driver-vkpzj" WorkloadEndpoint="localhost-k8s-csi--node--driver--vkpzj-eth0" Sep 13 00:21:30.755259 containerd[1470]: 2025-09-13 00:21:30.722 [INFO][4877] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b82d6e5fec ContainerID="67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" Namespace="calico-system" Pod="csi-node-driver-vkpzj" WorkloadEndpoint="localhost-k8s-csi--node--driver--vkpzj-eth0" Sep 13 00:21:30.755259 containerd[1470]: 2025-09-13 00:21:30.726 [INFO][4877] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" Namespace="calico-system" Pod="csi-node-driver-vkpzj" WorkloadEndpoint="localhost-k8s-csi--node--driver--vkpzj-eth0" Sep 13 00:21:30.755259 containerd[1470]: 2025-09-13 00:21:30.727 [INFO][4877] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" Namespace="calico-system" Pod="csi-node-driver-vkpzj" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--vkpzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vkpzj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe5fc44c-61ad-4dd3-a615-acf708addb61", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 21, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a", Pod:"csi-node-driver-vkpzj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9b82d6e5fec", MAC:"de:27:d7:4c:52:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:30.755259 containerd[1470]: 2025-09-13 00:21:30.747 [INFO][4877] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a" Namespace="calico-system" Pod="csi-node-driver-vkpzj" WorkloadEndpoint="localhost-k8s-csi--node--driver--vkpzj-eth0" Sep 13 00:21:30.769088 systemd[1]: Started sshd@9-10.0.0.7:22-10.0.0.1:44336.service - OpenSSH per-connection server daemon (10.0.0.1:44336). Sep 13 00:21:30.782672 containerd[1470]: time="2025-09-13T00:21:30.781875530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:21:30.782672 containerd[1470]: time="2025-09-13T00:21:30.781960310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:21:30.782672 containerd[1470]: time="2025-09-13T00:21:30.781975419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:21:30.782672 containerd[1470]: time="2025-09-13T00:21:30.782093972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:21:30.811836 systemd[1]: Started cri-containerd-67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a.scope - libcontainer container 67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a. 
Sep 13 00:21:30.825132 sshd[4987]: Accepted publickey for core from 10.0.0.1 port 44336 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU Sep 13 00:21:30.826525 sshd[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:21:30.827188 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:21:30.833321 systemd-logind[1449]: New session 10 of user core. Sep 13 00:21:30.841770 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:21:30.846802 containerd[1470]: time="2025-09-13T00:21:30.846754593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vkpzj,Uid:fe5fc44c-61ad-4dd3-a615-acf708addb61,Namespace:calico-system,Attempt:1,} returns sandbox id \"67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a\"" Sep 13 00:21:31.036125 sshd[4987]: pam_unix(sshd:session): session closed for user core Sep 13 00:21:31.041951 systemd[1]: sshd@9-10.0.0.7:22-10.0.0.1:44336.service: Deactivated successfully. Sep 13 00:21:31.044299 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:21:31.045209 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:21:31.046322 systemd-logind[1449]: Removed session 10. Sep 13 00:21:31.655952 systemd-networkd[1399]: calic93d431670f: Gained IPv6LL Sep 13 00:21:31.718878 systemd-networkd[1399]: cali7cca1946ddf: Gained IPv6LL Sep 13 00:21:32.102846 systemd-networkd[1399]: cali9b82d6e5fec: Gained IPv6LL Sep 13 00:21:32.166887 systemd-networkd[1399]: cali68a4c8787ce: Gained IPv6LL Sep 13 00:21:33.762940 containerd[1470]: time="2025-09-13T00:21:33.762875059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:33.785870 containerd[1470]: time="2025-09-13T00:21:33.785834115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:21:33.862598 containerd[1470]: time="2025-09-13T00:21:33.862535337Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:33.904418 containerd[1470]: time="2025-09-13T00:21:33.904373120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:33.905924 containerd[1470]: time="2025-09-13T00:21:33.905873518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 6.1009443s" Sep 13 00:21:33.905924 containerd[1470]: time="2025-09-13T00:21:33.905914795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:21:33.907687 containerd[1470]: time="2025-09-13T00:21:33.907174119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:21:34.124482 containerd[1470]: 
time="2025-09-13T00:21:34.124349682Z" level=info msg="CreateContainer within sandbox \"715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:21:34.599241 containerd[1470]: time="2025-09-13T00:21:34.599141490Z" level=info msg="CreateContainer within sandbox \"715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bd56d916bf5f1745bc3b507fdef84c913f23004a8b0bad60b0e3618378efb6de\"" Sep 13 00:21:34.599920 containerd[1470]: time="2025-09-13T00:21:34.599880943Z" level=info msg="StartContainer for \"bd56d916bf5f1745bc3b507fdef84c913f23004a8b0bad60b0e3618378efb6de\"" Sep 13 00:21:34.632092 systemd[1]: Started cri-containerd-bd56d916bf5f1745bc3b507fdef84c913f23004a8b0bad60b0e3618378efb6de.scope - libcontainer container bd56d916bf5f1745bc3b507fdef84c913f23004a8b0bad60b0e3618378efb6de. Sep 13 00:21:34.684777 containerd[1470]: time="2025-09-13T00:21:34.684720685Z" level=info msg="StartContainer for \"bd56d916bf5f1745bc3b507fdef84c913f23004a8b0bad60b0e3618378efb6de\" returns successfully" Sep 13 00:21:35.484878 kubelet[2525]: I0913 00:21:35.484807 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6f44cd849d-mzc4c" podStartSLOduration=25.326535229 podStartE2EDuration="32.484787682s" podCreationTimestamp="2025-09-13 00:21:03 +0000 UTC" firstStartedPulling="2025-09-13 00:21:26.748565083 +0000 UTC m=+45.602006764" lastFinishedPulling="2025-09-13 00:21:33.906817536 +0000 UTC m=+52.760259217" observedRunningTime="2025-09-13 00:21:35.484242295 +0000 UTC m=+54.337683986" watchObservedRunningTime="2025-09-13 00:21:35.484787682 +0000 UTC m=+54.338229364" Sep 13 00:21:36.056353 systemd[1]: Started sshd@10-10.0.0.7:22-10.0.0.1:44340.service - OpenSSH per-connection server daemon (10.0.0.1:44340). Sep 13 00:21:36.103919 sshd[5125]: Accepted publickey for core from 10.0.0.1 port 44340 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU Sep 13 00:21:36.106282 sshd[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:21:36.111387 systemd-logind[1449]: New session 11 of user core. Sep 13 00:21:36.118888 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:21:36.252635 sshd[5125]: pam_unix(sshd:session): session closed for user core Sep 13 00:21:36.257889 systemd[1]: sshd@10-10.0.0.7:22-10.0.0.1:44340.service: Deactivated successfully. Sep 13 00:21:36.259991 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:21:36.260892 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:21:36.261742 systemd-logind[1449]: Removed session 11. Sep 13 00:21:37.630569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4253189359.mount: Deactivated successfully. 
Sep 13 00:21:38.343335 containerd[1470]: time="2025-09-13T00:21:38.343275802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:38.344677 containerd[1470]: time="2025-09-13T00:21:38.344596528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:21:38.346411 containerd[1470]: time="2025-09-13T00:21:38.346346445Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:38.349244 containerd[1470]: time="2025-09-13T00:21:38.349181432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:38.350415 containerd[1470]: time="2025-09-13T00:21:38.350350601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.443124925s" Sep 13 00:21:38.350415 containerd[1470]: time="2025-09-13T00:21:38.350412668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:21:38.351733 containerd[1470]: time="2025-09-13T00:21:38.351674833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:21:38.356960 containerd[1470]: time="2025-09-13T00:21:38.356930265Z" level=info msg="CreateContainer within sandbox \"60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:21:38.376170 containerd[1470]: time="2025-09-13T00:21:38.376110448Z" level=info msg="CreateContainer within sandbox \"60a01ae715aecb7e470b26c9de60a7a507afb1eb632ab64c36230c48b023eac2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"698979661ef7808b9dcdb601e3185f6bf6cec1bbc12858ec9e02f55bf67abfa3\"" Sep 13 00:21:38.376835 containerd[1470]: time="2025-09-13T00:21:38.376772570Z" level=info msg="StartContainer for \"698979661ef7808b9dcdb601e3185f6bf6cec1bbc12858ec9e02f55bf67abfa3\"" Sep 13 00:21:38.423048 systemd[1]: Started cri-containerd-698979661ef7808b9dcdb601e3185f6bf6cec1bbc12858ec9e02f55bf67abfa3.scope - libcontainer container 698979661ef7808b9dcdb601e3185f6bf6cec1bbc12858ec9e02f55bf67abfa3. Sep 13 00:21:38.469069 containerd[1470]: time="2025-09-13T00:21:38.468943541Z" level=info msg="StartContainer for \"698979661ef7808b9dcdb601e3185f6bf6cec1bbc12858ec9e02f55bf67abfa3\" returns successfully" Sep 13 00:21:41.247009 containerd[1470]: time="2025-09-13T00:21:41.246954226Z" level=info msg="StopPodSandbox for \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\"" Sep 13 00:21:41.267333 systemd[1]: Started sshd@11-10.0.0.7:22-10.0.0.1:50464.service - OpenSSH per-connection server daemon (10.0.0.1:50464). 
Sep 13 00:21:41.319935 sshd[5218]: Accepted publickey for core from 10.0.0.1 port 50464 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU Sep 13 00:21:41.322579 sshd[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:21:41.327877 systemd-logind[1449]: New session 12 of user core. Sep 13 00:21:41.344784 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:21:41.371204 containerd[1470]: 2025-09-13 00:21:41.318 [WARNING][5211] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" WorkloadEndpoint="localhost-k8s-whisker--d846d4f55--j7f97-eth0" Sep 13 00:21:41.371204 containerd[1470]: 2025-09-13 00:21:41.318 [INFO][5211] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Sep 13 00:21:41.371204 containerd[1470]: 2025-09-13 00:21:41.318 [INFO][5211] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" iface="eth0" netns="" Sep 13 00:21:41.371204 containerd[1470]: 2025-09-13 00:21:41.318 [INFO][5211] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Sep 13 00:21:41.371204 containerd[1470]: 2025-09-13 00:21:41.318 [INFO][5211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Sep 13 00:21:41.371204 containerd[1470]: 2025-09-13 00:21:41.350 [INFO][5224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" HandleID="k8s-pod-network.849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Workload="localhost-k8s-whisker--d846d4f55--j7f97-eth0" Sep 13 00:21:41.371204 containerd[1470]: 2025-09-13 00:21:41.351 [INFO][5224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:41.371204 containerd[1470]: 2025-09-13 00:21:41.351 [INFO][5224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:21:41.371204 containerd[1470]: 2025-09-13 00:21:41.359 [WARNING][5224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" HandleID="k8s-pod-network.849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Workload="localhost-k8s-whisker--d846d4f55--j7f97-eth0" Sep 13 00:21:41.371204 containerd[1470]: 2025-09-13 00:21:41.359 [INFO][5224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" HandleID="k8s-pod-network.849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Workload="localhost-k8s-whisker--d846d4f55--j7f97-eth0" Sep 13 00:21:41.371204 containerd[1470]: 2025-09-13 00:21:41.361 [INFO][5224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:41.371204 containerd[1470]: 2025-09-13 00:21:41.366 [INFO][5211] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Sep 13 00:21:41.376025 containerd[1470]: time="2025-09-13T00:21:41.371261824Z" level=info msg="TearDown network for sandbox \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\" successfully" Sep 13 00:21:41.376025 containerd[1470]: time="2025-09-13T00:21:41.371303665Z" level=info msg="StopPodSandbox for \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\" returns successfully" Sep 13 00:21:41.376734 containerd[1470]: time="2025-09-13T00:21:41.376690342Z" level=info msg="RemovePodSandbox for \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\"" Sep 13 00:21:41.378827 containerd[1470]: time="2025-09-13T00:21:41.378800015Z" level=info msg="Forcibly stopping sandbox \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\"" Sep 13 00:21:41.531339 sshd[5218]: pam_unix(sshd:session): session closed for user core Sep 13 00:21:41.534737 containerd[1470]: 2025-09-13 00:21:41.429 [WARNING][5244] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" WorkloadEndpoint="localhost-k8s-whisker--d846d4f55--j7f97-eth0" Sep 13 00:21:41.534737 containerd[1470]: 2025-09-13 00:21:41.429 [INFO][5244] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Sep 13 00:21:41.534737 containerd[1470]: 2025-09-13 00:21:41.429 [INFO][5244] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" iface="eth0" netns="" Sep 13 00:21:41.534737 containerd[1470]: 2025-09-13 00:21:41.429 [INFO][5244] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Sep 13 00:21:41.534737 containerd[1470]: 2025-09-13 00:21:41.429 [INFO][5244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Sep 13 00:21:41.534737 containerd[1470]: 2025-09-13 00:21:41.467 [INFO][5260] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" HandleID="k8s-pod-network.849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Workload="localhost-k8s-whisker--d846d4f55--j7f97-eth0" Sep 13 00:21:41.534737 containerd[1470]: 2025-09-13 00:21:41.467 [INFO][5260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:41.534737 containerd[1470]: 2025-09-13 00:21:41.468 [INFO][5260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:21:41.534737 containerd[1470]: 2025-09-13 00:21:41.525 [WARNING][5260] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" HandleID="k8s-pod-network.849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Workload="localhost-k8s-whisker--d846d4f55--j7f97-eth0" Sep 13 00:21:41.534737 containerd[1470]: 2025-09-13 00:21:41.525 [INFO][5260] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" HandleID="k8s-pod-network.849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Workload="localhost-k8s-whisker--d846d4f55--j7f97-eth0" Sep 13 00:21:41.534737 containerd[1470]: 2025-09-13 00:21:41.528 [INFO][5260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:41.534737 containerd[1470]: 2025-09-13 00:21:41.531 [INFO][5244] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b" Sep 13 00:21:41.535124 containerd[1470]: time="2025-09-13T00:21:41.534777974Z" level=info msg="TearDown network for sandbox \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\" successfully" Sep 13 00:21:41.541479 systemd[1]: sshd@11-10.0.0.7:22-10.0.0.1:50464.service: Deactivated successfully. Sep 13 00:21:41.544247 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:21:41.546604 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:21:41.554122 systemd[1]: Started sshd@12-10.0.0.7:22-10.0.0.1:50474.service - OpenSSH per-connection server daemon (10.0.0.1:50474). Sep 13 00:21:41.556308 systemd-logind[1449]: Removed session 12. Sep 13 00:21:41.786024 containerd[1470]: time="2025-09-13T00:21:41.785882946Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:21:41.786024 containerd[1470]: time="2025-09-13T00:21:41.785985925Z" level=info msg="RemovePodSandbox \"849e47d41b0a33fe2f3be1ca67ea904dbd5a1baeb28c6162b0407c67d38a325b\" returns successfully" Sep 13 00:21:41.786612 containerd[1470]: time="2025-09-13T00:21:41.786560704Z" level=info msg="StopPodSandbox for \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\"" Sep 13 00:21:41.797488 sshd[5274]: Accepted publickey for core from 10.0.0.1 port 50474 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU Sep 13 00:21:41.800065 sshd[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:21:41.808999 containerd[1470]: time="2025-09-13T00:21:41.806746639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:21:41.808999 containerd[1470]: time="2025-09-13T00:21:41.806995990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:41.807526 systemd-logind[1449]: New session 13 of user core. 
Sep 13 00:21:41.810388 containerd[1470]: time="2025-09-13T00:21:41.810358551Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:41.813150 containerd[1470]: time="2025-09-13T00:21:41.813111134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:41.814218 containerd[1470]: time="2025-09-13T00:21:41.814185247Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.462474706s" Sep 13 00:21:41.814290 containerd[1470]: time="2025-09-13T00:21:41.814223841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:21:41.815948 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:21:41.819844 containerd[1470]: time="2025-09-13T00:21:41.819810405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:21:41.829146 containerd[1470]: time="2025-09-13T00:21:41.829083124Z" level=info msg="CreateContainer within sandbox \"f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:21:41.849141 containerd[1470]: time="2025-09-13T00:21:41.849085464Z" level=info msg="CreateContainer within sandbox \"f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d4b61c1637874ef61f6cbc2ca7319e09f954c5cc4393fa070b5057c2f89b77d8\"" Sep 13 00:21:41.850466 containerd[1470]: time="2025-09-13T00:21:41.850366998Z" level=info msg="StartContainer for \"d4b61c1637874ef61f6cbc2ca7319e09f954c5cc4393fa070b5057c2f89b77d8\"" Sep 13 00:21:41.873664 containerd[1470]: 2025-09-13 00:21:41.827 [WARNING][5286] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--whpvq-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"337e4386-9d96-4f61-84bf-9854d2b2501c", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 21, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28", Pod:"goldmane-54d579b49d-whpvq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic93d431670f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:41.873664 containerd[1470]: 2025-09-13 00:21:41.828 [INFO][5286] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Sep 13 00:21:41.873664 containerd[1470]: 2025-09-13 00:21:41.828 [INFO][5286] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" iface="eth0" netns="" Sep 13 00:21:41.873664 containerd[1470]: 2025-09-13 00:21:41.828 [INFO][5286] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Sep 13 00:21:41.873664 containerd[1470]: 2025-09-13 00:21:41.828 [INFO][5286] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Sep 13 00:21:41.873664 containerd[1470]: 2025-09-13 00:21:41.855 [INFO][5299] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" HandleID="k8s-pod-network.1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Workload="localhost-k8s-goldmane--54d579b49d--whpvq-eth0" Sep 13 00:21:41.873664 containerd[1470]: 2025-09-13 00:21:41.855 [INFO][5299] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:41.873664 containerd[1470]: 2025-09-13 00:21:41.855 [INFO][5299] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:21:41.873664 containerd[1470]: 2025-09-13 00:21:41.861 [WARNING][5299] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" HandleID="k8s-pod-network.1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Workload="localhost-k8s-goldmane--54d579b49d--whpvq-eth0" Sep 13 00:21:41.873664 containerd[1470]: 2025-09-13 00:21:41.861 [INFO][5299] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" HandleID="k8s-pod-network.1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Workload="localhost-k8s-goldmane--54d579b49d--whpvq-eth0" Sep 13 00:21:41.873664 containerd[1470]: 2025-09-13 00:21:41.862 [INFO][5299] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:41.873664 containerd[1470]: 2025-09-13 00:21:41.868 [INFO][5286] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Sep 13 00:21:41.873664 containerd[1470]: time="2025-09-13T00:21:41.873250068Z" level=info msg="TearDown network for sandbox \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\" successfully" Sep 13 00:21:41.873664 containerd[1470]: time="2025-09-13T00:21:41.873285156Z" level=info msg="StopPodSandbox for \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\" returns successfully" Sep 13 00:21:41.875611 containerd[1470]: time="2025-09-13T00:21:41.875277252Z" level=info msg="RemovePodSandbox for \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\"" Sep 13 00:21:41.875611 containerd[1470]: time="2025-09-13T00:21:41.875318652Z" level=info msg="Forcibly stopping sandbox \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\"" Sep 13 00:21:41.920851 systemd[1]: Started cri-containerd-d4b61c1637874ef61f6cbc2ca7319e09f954c5cc4393fa070b5057c2f89b77d8.scope - libcontainer container d4b61c1637874ef61f6cbc2ca7319e09f954c5cc4393fa070b5057c2f89b77d8. Sep 13 00:21:41.994448 containerd[1470]: 2025-09-13 00:21:41.924 [WARNING][5328] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--whpvq-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"337e4386-9d96-4f61-84bf-9854d2b2501c", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 21, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28", Pod:"goldmane-54d579b49d-whpvq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic93d431670f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:41.994448 containerd[1470]: 2025-09-13 00:21:41.924 [INFO][5328] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Sep 13 00:21:41.994448 containerd[1470]: 2025-09-13 00:21:41.924 [INFO][5328] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" iface="eth0" netns="" Sep 13 00:21:41.994448 containerd[1470]: 2025-09-13 00:21:41.924 [INFO][5328] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Sep 13 00:21:41.994448 containerd[1470]: 2025-09-13 00:21:41.924 [INFO][5328] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Sep 13 00:21:41.994448 containerd[1470]: 2025-09-13 00:21:41.962 [INFO][5356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" HandleID="k8s-pod-network.1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Workload="localhost-k8s-goldmane--54d579b49d--whpvq-eth0" Sep 13 00:21:41.994448 containerd[1470]: 2025-09-13 00:21:41.962 [INFO][5356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:41.994448 containerd[1470]: 2025-09-13 00:21:41.963 [INFO][5356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:21:41.994448 containerd[1470]: 2025-09-13 00:21:41.971 [WARNING][5356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" HandleID="k8s-pod-network.1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Workload="localhost-k8s-goldmane--54d579b49d--whpvq-eth0" Sep 13 00:21:41.994448 containerd[1470]: 2025-09-13 00:21:41.971 [INFO][5356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" HandleID="k8s-pod-network.1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Workload="localhost-k8s-goldmane--54d579b49d--whpvq-eth0" Sep 13 00:21:41.994448 containerd[1470]: 2025-09-13 00:21:41.976 [INFO][5356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:41.994448 containerd[1470]: 2025-09-13 00:21:41.982 [INFO][5328] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a" Sep 13 00:21:41.994448 containerd[1470]: time="2025-09-13T00:21:41.994278136Z" level=info msg="TearDown network for sandbox \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\" successfully" Sep 13 00:21:41.999115 containerd[1470]: time="2025-09-13T00:21:41.999061218Z" level=info msg="StartContainer for \"d4b61c1637874ef61f6cbc2ca7319e09f954c5cc4393fa070b5057c2f89b77d8\" returns successfully" Sep 13 00:21:42.022397 containerd[1470]: time="2025-09-13T00:21:42.022179837Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:21:42.022397 containerd[1470]: time="2025-09-13T00:21:42.022260513Z" level=info msg="RemovePodSandbox \"1e8614c116e7ae7d5b78cf194a6573d4e081749fd61b970de84a21d12704550a\" returns successfully" Sep 13 00:21:42.024069 containerd[1470]: time="2025-09-13T00:21:42.024032551Z" level=info msg="StopPodSandbox for \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\"" Sep 13 00:21:42.084990 sshd[5274]: pam_unix(sshd:session): session closed for user core Sep 13 00:21:42.108078 systemd[1]: Started sshd@13-10.0.0.7:22-10.0.0.1:50484.service - OpenSSH per-connection server daemon (10.0.0.1:50484). Sep 13 00:21:42.109640 systemd[1]: sshd@12-10.0.0.7:22-10.0.0.1:50474.service: Deactivated successfully. Sep 13 00:21:42.114300 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:21:42.118033 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:21:42.125216 systemd-logind[1449]: Removed session 13. Sep 13 00:21:42.142240 containerd[1470]: 2025-09-13 00:21:42.070 [WARNING][5389] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0", GenerateName:"calico-kube-controllers-6f44cd849d-", Namespace:"calico-system", SelfLink:"", UID:"81467d75-5765-475d-8a7f-390251ac6b99", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 21, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f44cd849d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff", Pod:"calico-kube-controllers-6f44cd849d-mzc4c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid1fa496aa3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:42.142240 containerd[1470]: 2025-09-13 00:21:42.070 [INFO][5389] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Sep 13 00:21:42.142240 containerd[1470]: 2025-09-13 00:21:42.070 [INFO][5389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" iface="eth0" netns="" Sep 13 00:21:42.142240 containerd[1470]: 2025-09-13 00:21:42.070 [INFO][5389] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Sep 13 00:21:42.142240 containerd[1470]: 2025-09-13 00:21:42.070 [INFO][5389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Sep 13 00:21:42.142240 containerd[1470]: 2025-09-13 00:21:42.125 [INFO][5403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" HandleID="k8s-pod-network.2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Workload="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" Sep 13 00:21:42.142240 containerd[1470]: 2025-09-13 00:21:42.125 [INFO][5403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:42.142240 containerd[1470]: 2025-09-13 00:21:42.125 [INFO][5403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:21:42.142240 containerd[1470]: 2025-09-13 00:21:42.133 [WARNING][5403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" HandleID="k8s-pod-network.2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Workload="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" Sep 13 00:21:42.142240 containerd[1470]: 2025-09-13 00:21:42.133 [INFO][5403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" HandleID="k8s-pod-network.2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Workload="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" Sep 13 00:21:42.142240 containerd[1470]: 2025-09-13 00:21:42.135 [INFO][5403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:42.142240 containerd[1470]: 2025-09-13 00:21:42.138 [INFO][5389] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Sep 13 00:21:42.142710 containerd[1470]: time="2025-09-13T00:21:42.142284205Z" level=info msg="TearDown network for sandbox \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\" successfully" Sep 13 00:21:42.142710 containerd[1470]: time="2025-09-13T00:21:42.142309934Z" level=info msg="StopPodSandbox for \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\" returns successfully" Sep 13 00:21:42.143507 containerd[1470]: time="2025-09-13T00:21:42.143120548Z" level=info msg="RemovePodSandbox for \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\"" Sep 13 00:21:42.143507 containerd[1470]: time="2025-09-13T00:21:42.143150375Z" level=info msg="Forcibly stopping sandbox \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\"" Sep 13 00:21:42.159178 sshd[5410]: Accepted publickey for core from 10.0.0.1 port 50484 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU Sep 13 00:21:42.162324 sshd[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:21:42.168902 systemd-logind[1449]: New session 14 of user core. Sep 13 00:21:42.175089 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:21:42.235442 containerd[1470]: 2025-09-13 00:21:42.191 [WARNING][5427] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0", GenerateName:"calico-kube-controllers-6f44cd849d-", Namespace:"calico-system", SelfLink:"", UID:"81467d75-5765-475d-8a7f-390251ac6b99", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 21, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f44cd849d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"715f7148f8c0b14dddde4d6e976f492075f0f75fd5cbe443280930f6adaba5ff", Pod:"calico-kube-controllers-6f44cd849d-mzc4c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid1fa496aa3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:42.235442 containerd[1470]: 2025-09-13 00:21:42.191 [INFO][5427] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Sep 13 00:21:42.235442 containerd[1470]: 2025-09-13 00:21:42.191 [INFO][5427] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" iface="eth0" netns="" Sep 13 00:21:42.235442 containerd[1470]: 2025-09-13 00:21:42.191 [INFO][5427] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Sep 13 00:21:42.235442 containerd[1470]: 2025-09-13 00:21:42.191 [INFO][5427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Sep 13 00:21:42.235442 containerd[1470]: 2025-09-13 00:21:42.216 [INFO][5437] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" HandleID="k8s-pod-network.2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Workload="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" Sep 13 00:21:42.235442 containerd[1470]: 2025-09-13 00:21:42.216 [INFO][5437] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:42.235442 containerd[1470]: 2025-09-13 00:21:42.216 [INFO][5437] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:21:42.235442 containerd[1470]: 2025-09-13 00:21:42.224 [WARNING][5437] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" HandleID="k8s-pod-network.2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Workload="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" Sep 13 00:21:42.235442 containerd[1470]: 2025-09-13 00:21:42.224 [INFO][5437] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" HandleID="k8s-pod-network.2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Workload="localhost-k8s-calico--kube--controllers--6f44cd849d--mzc4c-eth0" Sep 13 00:21:42.235442 containerd[1470]: 2025-09-13 00:21:42.228 [INFO][5437] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:42.235442 containerd[1470]: 2025-09-13 00:21:42.232 [INFO][5427] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d" Sep 13 00:21:42.239641 containerd[1470]: time="2025-09-13T00:21:42.236159879Z" level=info msg="TearDown network for sandbox \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\" successfully" Sep 13 00:21:42.242717 containerd[1470]: time="2025-09-13T00:21:42.242453199Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:21:42.242717 containerd[1470]: time="2025-09-13T00:21:42.242526230Z" level=info msg="RemovePodSandbox \"2a3b8a1bea02743e75134a3ca4e8353c9af8291cb943f2803d7c8b41c001a27d\" returns successfully" Sep 13 00:21:42.243641 containerd[1470]: time="2025-09-13T00:21:42.243321874Z" level=info msg="StopPodSandbox for \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\"" Sep 13 00:21:42.338702 sshd[5410]: pam_unix(sshd:session): session closed for user core Sep 13 00:21:42.345214 systemd[1]: sshd@13-10.0.0.7:22-10.0.0.1:50484.service: Deactivated successfully. Sep 13 00:21:42.348393 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:21:42.349431 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:21:42.350588 systemd-logind[1449]: Removed session 14. Sep 13 00:21:42.352008 containerd[1470]: 2025-09-13 00:21:42.297 [WARNING][5461] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0", GenerateName:"calico-apiserver-8667466bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"5904f0eb-a84b-4d3d-ba8a-eed68333160b", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 20, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8667466bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660", Pod:"calico-apiserver-8667466bbf-45njh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68a4c8787ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:42.352008 containerd[1470]: 2025-09-13 00:21:42.297 [INFO][5461] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Sep 13 00:21:42.352008 containerd[1470]: 2025-09-13 00:21:42.297 [INFO][5461] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" iface="eth0" netns="" Sep 13 00:21:42.352008 containerd[1470]: 2025-09-13 00:21:42.297 [INFO][5461] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Sep 13 00:21:42.352008 containerd[1470]: 2025-09-13 00:21:42.297 [INFO][5461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Sep 13 00:21:42.352008 containerd[1470]: 2025-09-13 00:21:42.333 [INFO][5470] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" HandleID="k8s-pod-network.0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Workload="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0" Sep 13 00:21:42.352008 containerd[1470]: 2025-09-13 00:21:42.333 [INFO][5470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:42.352008 containerd[1470]: 2025-09-13 00:21:42.333 [INFO][5470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:21:42.352008 containerd[1470]: 2025-09-13 00:21:42.340 [WARNING][5470] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" HandleID="k8s-pod-network.0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Workload="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0" Sep 13 00:21:42.352008 containerd[1470]: 2025-09-13 00:21:42.340 [INFO][5470] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" HandleID="k8s-pod-network.0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Workload="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0" Sep 13 00:21:42.352008 containerd[1470]: 2025-09-13 00:21:42.342 [INFO][5470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:42.352008 containerd[1470]: 2025-09-13 00:21:42.347 [INFO][5461] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Sep 13 00:21:42.352855 containerd[1470]: time="2025-09-13T00:21:42.352066850Z" level=info msg="TearDown network for sandbox \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\" successfully" Sep 13 00:21:42.352855 containerd[1470]: time="2025-09-13T00:21:42.352097909Z" level=info msg="StopPodSandbox for \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\" returns successfully" Sep 13 00:21:42.352855 containerd[1470]: time="2025-09-13T00:21:42.352725821Z" level=info msg="RemovePodSandbox for \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\"" Sep 13 00:21:42.352855 containerd[1470]: time="2025-09-13T00:21:42.352758474Z" level=info msg="Forcibly stopping sandbox \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\"" Sep 13 00:21:42.556039 containerd[1470]: 2025-09-13 00:21:42.464 [WARNING][5489] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0", GenerateName:"calico-apiserver-8667466bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"5904f0eb-a84b-4d3d-ba8a-eed68333160b", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 20, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8667466bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660", Pod:"calico-apiserver-8667466bbf-45njh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68a4c8787ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:42.556039 containerd[1470]: 2025-09-13 00:21:42.464 [INFO][5489] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Sep 13 00:21:42.556039 containerd[1470]: 2025-09-13 00:21:42.464 [INFO][5489] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" iface="eth0" netns="" Sep 13 00:21:42.556039 containerd[1470]: 2025-09-13 00:21:42.464 [INFO][5489] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Sep 13 00:21:42.556039 containerd[1470]: 2025-09-13 00:21:42.464 [INFO][5489] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Sep 13 00:21:42.556039 containerd[1470]: 2025-09-13 00:21:42.493 [INFO][5498] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" HandleID="k8s-pod-network.0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Workload="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0" Sep 13 00:21:42.556039 containerd[1470]: 2025-09-13 00:21:42.493 [INFO][5498] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:42.556039 containerd[1470]: 2025-09-13 00:21:42.493 [INFO][5498] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:21:42.556039 containerd[1470]: 2025-09-13 00:21:42.547 [WARNING][5498] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" HandleID="k8s-pod-network.0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Workload="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0" Sep 13 00:21:42.556039 containerd[1470]: 2025-09-13 00:21:42.547 [INFO][5498] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" HandleID="k8s-pod-network.0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Workload="localhost-k8s-calico--apiserver--8667466bbf--45njh-eth0" Sep 13 00:21:42.556039 containerd[1470]: 2025-09-13 00:21:42.550 [INFO][5498] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:42.556039 containerd[1470]: 2025-09-13 00:21:42.553 [INFO][5489] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde" Sep 13 00:21:42.556877 containerd[1470]: time="2025-09-13T00:21:42.556066119Z" level=info msg="TearDown network for sandbox \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\" successfully" Sep 13 00:21:42.664844 kubelet[2525]: I0913 00:21:42.664673 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8667466bbf-nvvbk" podStartSLOduration=33.133853276 podStartE2EDuration="44.664656988s" podCreationTimestamp="2025-09-13 00:20:58 +0000 UTC" firstStartedPulling="2025-09-13 00:21:30.285640326 +0000 UTC m=+49.139082007" lastFinishedPulling="2025-09-13 00:21:41.816444038 +0000 UTC m=+60.669885719" observedRunningTime="2025-09-13 00:21:42.663975723 +0000 UTC m=+61.517417464" watchObservedRunningTime="2025-09-13 00:21:42.664656988 +0000 UTC m=+61.518098669" Sep 13 00:21:42.665373 kubelet[2525]: I0913 00:21:42.664915 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6c95db994-qjbpw" podStartSLOduration=5.390672317 podStartE2EDuration="17.664910768s" podCreationTimestamp="2025-09-13 00:21:25 +0000 UTC" firstStartedPulling="2025-09-13 00:21:26.077203432 +0000 UTC m=+44.930645113" lastFinishedPulling="2025-09-13 00:21:38.351441883 +0000 UTC m=+57.204883564" observedRunningTime="2025-09-13 00:21:38.490351424 +0000 UTC m=+57.343793105" watchObservedRunningTime="2025-09-13 00:21:42.664910768 +0000 UTC m=+61.518352449" Sep 13 00:21:42.899074 containerd[1470]: time="2025-09-13T00:21:42.899014277Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:21:42.899074 containerd[1470]: time="2025-09-13T00:21:42.899086677Z" level=info msg="RemovePodSandbox \"0f2f7b934ca107ea50ac5f8150534a55d74f6d5985e39d83914944c5c0df9fde\" returns successfully" Sep 13 00:21:42.899791 containerd[1470]: time="2025-09-13T00:21:42.899753854Z" level=info msg="StopPodSandbox for \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\"" Sep 13 00:21:43.019185 containerd[1470]: 2025-09-13 00:21:42.980 [WARNING][5517] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vkpzj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe5fc44c-61ad-4dd3-a615-acf708addb61", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 21, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a", Pod:"csi-node-driver-vkpzj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9b82d6e5fec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:43.019185 containerd[1470]: 2025-09-13 00:21:42.980 [INFO][5517] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Sep 13 00:21:43.019185 containerd[1470]: 2025-09-13 00:21:42.981 [INFO][5517] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" iface="eth0" netns="" Sep 13 00:21:43.019185 containerd[1470]: 2025-09-13 00:21:42.981 [INFO][5517] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Sep 13 00:21:43.019185 containerd[1470]: 2025-09-13 00:21:42.981 [INFO][5517] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Sep 13 00:21:43.019185 containerd[1470]: 2025-09-13 00:21:43.006 [INFO][5527] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" HandleID="k8s-pod-network.0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Workload="localhost-k8s-csi--node--driver--vkpzj-eth0" Sep 13 00:21:43.019185 containerd[1470]: 2025-09-13 00:21:43.006 [INFO][5527] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:43.019185 containerd[1470]: 2025-09-13 00:21:43.006 [INFO][5527] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:21:43.019185 containerd[1470]: 2025-09-13 00:21:43.011 [WARNING][5527] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" HandleID="k8s-pod-network.0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Workload="localhost-k8s-csi--node--driver--vkpzj-eth0" Sep 13 00:21:43.019185 containerd[1470]: 2025-09-13 00:21:43.011 [INFO][5527] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" HandleID="k8s-pod-network.0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Workload="localhost-k8s-csi--node--driver--vkpzj-eth0" Sep 13 00:21:43.019185 containerd[1470]: 2025-09-13 00:21:43.012 [INFO][5527] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:43.019185 containerd[1470]: 2025-09-13 00:21:43.015 [INFO][5517] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Sep 13 00:21:43.020734 containerd[1470]: time="2025-09-13T00:21:43.019216913Z" level=info msg="TearDown network for sandbox \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\" successfully" Sep 13 00:21:43.020734 containerd[1470]: time="2025-09-13T00:21:43.019244295Z" level=info msg="StopPodSandbox for \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\" returns successfully" Sep 13 00:21:43.020734 containerd[1470]: time="2025-09-13T00:21:43.019852267Z" level=info msg="RemovePodSandbox for \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\"" Sep 13 00:21:43.020734 containerd[1470]: time="2025-09-13T00:21:43.019900581Z" level=info msg="Forcibly stopping sandbox \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\"" Sep 13 00:21:43.096732 containerd[1470]: 2025-09-13 00:21:43.056 [WARNING][5544] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vkpzj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe5fc44c-61ad-4dd3-a615-acf708addb61", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 21, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a", Pod:"csi-node-driver-vkpzj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9b82d6e5fec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:43.096732 containerd[1470]: 2025-09-13 00:21:43.057 [INFO][5544] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Sep 13 00:21:43.096732 containerd[1470]: 2025-09-13 00:21:43.057 [INFO][5544] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" iface="eth0" netns="" Sep 13 00:21:43.096732 containerd[1470]: 2025-09-13 00:21:43.057 [INFO][5544] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Sep 13 00:21:43.096732 containerd[1470]: 2025-09-13 00:21:43.057 [INFO][5544] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Sep 13 00:21:43.096732 containerd[1470]: 2025-09-13 00:21:43.082 [INFO][5552] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" HandleID="k8s-pod-network.0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Workload="localhost-k8s-csi--node--driver--vkpzj-eth0" Sep 13 00:21:43.096732 containerd[1470]: 2025-09-13 00:21:43.082 [INFO][5552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:43.096732 containerd[1470]: 2025-09-13 00:21:43.082 [INFO][5552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:21:43.096732 containerd[1470]: 2025-09-13 00:21:43.088 [WARNING][5552] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" HandleID="k8s-pod-network.0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Workload="localhost-k8s-csi--node--driver--vkpzj-eth0" Sep 13 00:21:43.096732 containerd[1470]: 2025-09-13 00:21:43.088 [INFO][5552] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" HandleID="k8s-pod-network.0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Workload="localhost-k8s-csi--node--driver--vkpzj-eth0" Sep 13 00:21:43.096732 containerd[1470]: 2025-09-13 00:21:43.090 [INFO][5552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:43.096732 containerd[1470]: 2025-09-13 00:21:43.093 [INFO][5544] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3" Sep 13 00:21:43.097221 containerd[1470]: time="2025-09-13T00:21:43.096776705Z" level=info msg="TearDown network for sandbox \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\" successfully" Sep 13 00:21:43.102607 containerd[1470]: time="2025-09-13T00:21:43.102557370Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:21:43.102607 containerd[1470]: time="2025-09-13T00:21:43.102606944Z" level=info msg="RemovePodSandbox \"0508c2929cd6b652bf97327a56c7f983546c913de9691fe925e7c9d5286ab5f3\" returns successfully" Sep 13 00:21:43.103578 containerd[1470]: time="2025-09-13T00:21:43.103110826Z" level=info msg="StopPodSandbox for \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\"" Sep 13 00:21:43.184479 containerd[1470]: 2025-09-13 00:21:43.140 [WARNING][5571] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4eabb29c-8307-4ea2-a6d0-81142535e33a", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 20, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4", Pod:"coredns-674b8bbfcf-f4k9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9320205bf87", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:43.184479 containerd[1470]: 2025-09-13 00:21:43.140 [INFO][5571] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Sep 13 00:21:43.184479 containerd[1470]: 2025-09-13 00:21:43.141 [INFO][5571] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" iface="eth0" netns="" Sep 13 00:21:43.184479 containerd[1470]: 2025-09-13 00:21:43.141 [INFO][5571] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Sep 13 00:21:43.184479 containerd[1470]: 2025-09-13 00:21:43.141 [INFO][5571] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Sep 13 00:21:43.184479 containerd[1470]: 2025-09-13 00:21:43.168 [INFO][5580] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" HandleID="k8s-pod-network.43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Workload="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" Sep 13 00:21:43.184479 containerd[1470]: 2025-09-13 00:21:43.168 [INFO][5580] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:43.184479 containerd[1470]: 2025-09-13 00:21:43.168 [INFO][5580] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:21:43.184479 containerd[1470]: 2025-09-13 00:21:43.174 [WARNING][5580] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" HandleID="k8s-pod-network.43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Workload="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" Sep 13 00:21:43.184479 containerd[1470]: 2025-09-13 00:21:43.175 [INFO][5580] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" HandleID="k8s-pod-network.43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Workload="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" Sep 13 00:21:43.184479 containerd[1470]: 2025-09-13 00:21:43.176 [INFO][5580] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:43.184479 containerd[1470]: 2025-09-13 00:21:43.181 [INFO][5571] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Sep 13 00:21:43.184479 containerd[1470]: time="2025-09-13T00:21:43.184448161Z" level=info msg="TearDown network for sandbox \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\" successfully" Sep 13 00:21:43.184479 containerd[1470]: time="2025-09-13T00:21:43.184475915Z" level=info msg="StopPodSandbox for \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\" returns successfully" Sep 13 00:21:43.185420 containerd[1470]: time="2025-09-13T00:21:43.185082594Z" level=info msg="RemovePodSandbox for \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\"" Sep 13 00:21:43.185420 containerd[1470]: time="2025-09-13T00:21:43.185107542Z" level=info msg="Forcibly stopping sandbox \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\"" Sep 13 00:21:43.268969 containerd[1470]: 2025-09-13 00:21:43.228 [WARNING][5597] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4eabb29c-8307-4ea2-a6d0-81142535e33a", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 20, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f3e4c6ef313ec893144fed871fa441c04e2d03689b90f182a015c78663a6ca4", Pod:"coredns-674b8bbfcf-f4k9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9320205bf87", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:43.268969 containerd[1470]: 2025-09-13 00:21:43.228 [INFO][5597] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Sep 13 00:21:43.268969 containerd[1470]: 2025-09-13 00:21:43.228 [INFO][5597] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" iface="eth0" netns="" Sep 13 00:21:43.268969 containerd[1470]: 2025-09-13 00:21:43.228 [INFO][5597] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Sep 13 00:21:43.268969 containerd[1470]: 2025-09-13 00:21:43.228 [INFO][5597] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Sep 13 00:21:43.268969 containerd[1470]: 2025-09-13 00:21:43.254 [INFO][5605] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" HandleID="k8s-pod-network.43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Workload="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" Sep 13 00:21:43.268969 containerd[1470]: 2025-09-13 00:21:43.254 [INFO][5605] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:43.268969 containerd[1470]: 2025-09-13 00:21:43.255 [INFO][5605] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:21:43.268969 containerd[1470]: 2025-09-13 00:21:43.261 [WARNING][5605] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" HandleID="k8s-pod-network.43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Workload="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" Sep 13 00:21:43.268969 containerd[1470]: 2025-09-13 00:21:43.261 [INFO][5605] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" HandleID="k8s-pod-network.43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Workload="localhost-k8s-coredns--674b8bbfcf--f4k9l-eth0" Sep 13 00:21:43.268969 containerd[1470]: 2025-09-13 00:21:43.262 [INFO][5605] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:43.268969 containerd[1470]: 2025-09-13 00:21:43.266 [INFO][5597] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569" Sep 13 00:21:43.269485 containerd[1470]: time="2025-09-13T00:21:43.269010253Z" level=info msg="TearDown network for sandbox \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\" successfully" Sep 13 00:21:43.278989 containerd[1470]: time="2025-09-13T00:21:43.278912351Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:21:43.278989 containerd[1470]: time="2025-09-13T00:21:43.278989459Z" level=info msg="RemovePodSandbox \"43312ebf5ef734ac8e4e648f72ff09c278f8c0ca626fd73b11a774785b419569\" returns successfully" Sep 13 00:21:43.279484 containerd[1470]: time="2025-09-13T00:21:43.279461099Z" level=info msg="StopPodSandbox for \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\"" Sep 13 00:21:43.358148 containerd[1470]: 2025-09-13 00:21:43.317 [WARNING][5624] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0", GenerateName:"calico-apiserver-8667466bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5330e44-0081-46ea-b521-bfc339b36095", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 20, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8667466bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233", Pod:"calico-apiserver-8667466bbf-nvvbk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cca1946ddf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:43.358148 containerd[1470]: 2025-09-13 00:21:43.318 [INFO][5624] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Sep 13 00:21:43.358148 containerd[1470]: 2025-09-13 00:21:43.318 [INFO][5624] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" iface="eth0" netns="" Sep 13 00:21:43.358148 containerd[1470]: 2025-09-13 00:21:43.318 [INFO][5624] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Sep 13 00:21:43.358148 containerd[1470]: 2025-09-13 00:21:43.318 [INFO][5624] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Sep 13 00:21:43.358148 containerd[1470]: 2025-09-13 00:21:43.343 [INFO][5633] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" HandleID="k8s-pod-network.903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Workload="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0" Sep 13 00:21:43.358148 containerd[1470]: 2025-09-13 00:21:43.343 [INFO][5633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:43.358148 containerd[1470]: 2025-09-13 00:21:43.343 [INFO][5633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:21:43.358148 containerd[1470]: 2025-09-13 00:21:43.349 [WARNING][5633] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" HandleID="k8s-pod-network.903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Workload="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0" Sep 13 00:21:43.358148 containerd[1470]: 2025-09-13 00:21:43.349 [INFO][5633] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" HandleID="k8s-pod-network.903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Workload="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0" Sep 13 00:21:43.358148 containerd[1470]: 2025-09-13 00:21:43.351 [INFO][5633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:43.358148 containerd[1470]: 2025-09-13 00:21:43.354 [INFO][5624] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Sep 13 00:21:43.359786 containerd[1470]: time="2025-09-13T00:21:43.358199985Z" level=info msg="TearDown network for sandbox \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\" successfully" Sep 13 00:21:43.359786 containerd[1470]: time="2025-09-13T00:21:43.358229201Z" level=info msg="StopPodSandbox for \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\" returns successfully" Sep 13 00:21:43.359786 containerd[1470]: time="2025-09-13T00:21:43.358775874Z" level=info msg="RemovePodSandbox for \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\"" Sep 13 00:21:43.359786 containerd[1470]: time="2025-09-13T00:21:43.358818297Z" level=info msg="Forcibly stopping sandbox \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\"" Sep 13 00:21:43.449612 containerd[1470]: 2025-09-13 00:21:43.407 [WARNING][5649] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0", GenerateName:"calico-apiserver-8667466bbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5330e44-0081-46ea-b521-bfc339b36095", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 20, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8667466bbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f12a801820dd773517ba69580cd56ece2f482db76d9f53092b2b38df7cee8233", Pod:"calico-apiserver-8667466bbf-nvvbk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cca1946ddf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:43.449612 containerd[1470]: 2025-09-13 00:21:43.407 [INFO][5649] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Sep 13 00:21:43.449612 containerd[1470]: 2025-09-13 00:21:43.407 [INFO][5649] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" iface="eth0" netns="" Sep 13 00:21:43.449612 containerd[1470]: 2025-09-13 00:21:43.407 [INFO][5649] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Sep 13 00:21:43.449612 containerd[1470]: 2025-09-13 00:21:43.407 [INFO][5649] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Sep 13 00:21:43.449612 containerd[1470]: 2025-09-13 00:21:43.434 [INFO][5657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" HandleID="k8s-pod-network.903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Workload="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0" Sep 13 00:21:43.449612 containerd[1470]: 2025-09-13 00:21:43.434 [INFO][5657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:43.449612 containerd[1470]: 2025-09-13 00:21:43.434 [INFO][5657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:21:43.449612 containerd[1470]: 2025-09-13 00:21:43.441 [WARNING][5657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" HandleID="k8s-pod-network.903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Workload="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0" Sep 13 00:21:43.449612 containerd[1470]: 2025-09-13 00:21:43.441 [INFO][5657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" HandleID="k8s-pod-network.903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Workload="localhost-k8s-calico--apiserver--8667466bbf--nvvbk-eth0" Sep 13 00:21:43.449612 containerd[1470]: 2025-09-13 00:21:43.442 [INFO][5657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:43.449612 containerd[1470]: 2025-09-13 00:21:43.446 [INFO][5649] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2" Sep 13 00:21:43.450186 containerd[1470]: time="2025-09-13T00:21:43.449710420Z" level=info msg="TearDown network for sandbox \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\" successfully" Sep 13 00:21:43.454765 containerd[1470]: time="2025-09-13T00:21:43.454719647Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:21:43.454869 containerd[1470]: time="2025-09-13T00:21:43.454785324Z" level=info msg="RemovePodSandbox \"903077faa512a14eefa154dffac1216bba7ad2f6e57e7019702d4e4f064f77f2\" returns successfully" Sep 13 00:21:43.456844 containerd[1470]: time="2025-09-13T00:21:43.456803534Z" level=info msg="StopPodSandbox for \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\"" Sep 13 00:21:43.769505 containerd[1470]: 2025-09-13 00:21:43.498 [WARNING][5675] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--w2qff-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c31b50ac-aa0d-44d3-b1cb-a5fd228e8908", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 20, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d", Pod:"coredns-674b8bbfcf-w2qff", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08b2bfe43aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:43.769505 containerd[1470]: 2025-09-13 00:21:43.498 [INFO][5675] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Sep 13 00:21:43.769505 containerd[1470]: 2025-09-13 00:21:43.498 [INFO][5675] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" iface="eth0" netns="" Sep 13 00:21:43.769505 containerd[1470]: 2025-09-13 00:21:43.498 [INFO][5675] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Sep 13 00:21:43.769505 containerd[1470]: 2025-09-13 00:21:43.498 [INFO][5675] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Sep 13 00:21:43.769505 containerd[1470]: 2025-09-13 00:21:43.527 [INFO][5686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" HandleID="k8s-pod-network.2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Workload="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" Sep 13 00:21:43.769505 containerd[1470]: 2025-09-13 00:21:43.527 [INFO][5686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:43.769505 containerd[1470]: 2025-09-13 00:21:43.527 [INFO][5686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:21:43.769505 containerd[1470]: 2025-09-13 00:21:43.757 [WARNING][5686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" HandleID="k8s-pod-network.2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Workload="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" Sep 13 00:21:43.769505 containerd[1470]: 2025-09-13 00:21:43.757 [INFO][5686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" HandleID="k8s-pod-network.2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Workload="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" Sep 13 00:21:43.769505 containerd[1470]: 2025-09-13 00:21:43.762 [INFO][5686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:43.769505 containerd[1470]: 2025-09-13 00:21:43.766 [INFO][5675] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Sep 13 00:21:43.770255 containerd[1470]: time="2025-09-13T00:21:43.769590889Z" level=info msg="TearDown network for sandbox \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\" successfully" Sep 13 00:21:43.770255 containerd[1470]: time="2025-09-13T00:21:43.769614214Z" level=info msg="StopPodSandbox for \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\" returns successfully" Sep 13 00:21:43.770255 containerd[1470]: time="2025-09-13T00:21:43.770180195Z" level=info msg="RemovePodSandbox for \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\"" Sep 13 00:21:43.770255 containerd[1470]: time="2025-09-13T00:21:43.770203399Z" level=info msg="Forcibly stopping sandbox \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\"" Sep 13 00:21:43.854724 containerd[1470]: 2025-09-13 00:21:43.813 [WARNING][5704] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--w2qff-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c31b50ac-aa0d-44d3-b1cb-a5fd228e8908", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 20, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cbb5858f35daa7461aea729a2cbe4d9ea0d81856c128ca2dca0931ef3b47b5d", Pod:"coredns-674b8bbfcf-w2qff", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08b2bfe43aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:21:43.854724 containerd[1470]: 2025-09-13 00:21:43.813 [INFO][5704] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Sep 13 00:21:43.854724 containerd[1470]: 2025-09-13 00:21:43.813 [INFO][5704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" iface="eth0" netns="" Sep 13 00:21:43.854724 containerd[1470]: 2025-09-13 00:21:43.813 [INFO][5704] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Sep 13 00:21:43.854724 containerd[1470]: 2025-09-13 00:21:43.813 [INFO][5704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Sep 13 00:21:43.854724 containerd[1470]: 2025-09-13 00:21:43.841 [INFO][5713] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" HandleID="k8s-pod-network.2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Workload="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" Sep 13 00:21:43.854724 containerd[1470]: 2025-09-13 00:21:43.842 [INFO][5713] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:21:43.854724 containerd[1470]: 2025-09-13 00:21:43.842 [INFO][5713] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:21:43.854724 containerd[1470]: 2025-09-13 00:21:43.847 [WARNING][5713] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" HandleID="k8s-pod-network.2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Workload="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" Sep 13 00:21:43.854724 containerd[1470]: 2025-09-13 00:21:43.847 [INFO][5713] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" HandleID="k8s-pod-network.2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Workload="localhost-k8s-coredns--674b8bbfcf--w2qff-eth0" Sep 13 00:21:43.854724 containerd[1470]: 2025-09-13 00:21:43.848 [INFO][5713] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:21:43.854724 containerd[1470]: 2025-09-13 00:21:43.851 [INFO][5704] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306" Sep 13 00:21:43.855154 containerd[1470]: time="2025-09-13T00:21:43.854762044Z" level=info msg="TearDown network for sandbox \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\" successfully" Sep 13 00:21:43.910764 containerd[1470]: time="2025-09-13T00:21:43.910705277Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:21:43.910764 containerd[1470]: time="2025-09-13T00:21:43.910774960Z" level=info msg="RemovePodSandbox \"2aef2af2be4c15be6be25930c3ff340c178e15af7d8977baf1c1d1dc39926306\" returns successfully" Sep 13 00:21:45.459047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2276173429.mount: Deactivated successfully. 
Sep 13 00:21:46.243466 containerd[1470]: time="2025-09-13T00:21:46.243406037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:46.244214 containerd[1470]: time="2025-09-13T00:21:46.244177001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:21:46.245385 containerd[1470]: time="2025-09-13T00:21:46.245338757Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:46.247883 containerd[1470]: time="2025-09-13T00:21:46.247858575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:46.248654 containerd[1470]: time="2025-09-13T00:21:46.248604701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.428755491s" Sep 13 00:21:46.248654 containerd[1470]: time="2025-09-13T00:21:46.248649227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:21:46.249704 containerd[1470]: time="2025-09-13T00:21:46.249685371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:21:46.253949 containerd[1470]: time="2025-09-13T00:21:46.253907424Z" level=info msg="CreateContainer within sandbox \"99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:21:46.270457 containerd[1470]: time="2025-09-13T00:21:46.270406391Z" level=info msg="CreateContainer within sandbox \"99a81d27170208d022f3a710e1e1418688b0bf2a9f756e92445bc8f41150aa28\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"82adbb183a3246a991ad60e424f2f3bcccb7a662f31ae89844130d5cc2aaaa2e\"" Sep 13 00:21:46.272254 containerd[1470]: time="2025-09-13T00:21:46.271016676Z" level=info msg="StartContainer for \"82adbb183a3246a991ad60e424f2f3bcccb7a662f31ae89844130d5cc2aaaa2e\"" Sep 13 00:21:46.343774 systemd[1]: Started cri-containerd-82adbb183a3246a991ad60e424f2f3bcccb7a662f31ae89844130d5cc2aaaa2e.scope - libcontainer container 82adbb183a3246a991ad60e424f2f3bcccb7a662f31ae89844130d5cc2aaaa2e. 
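The goldmane pull above reports both the bytes read (66357526) and the wall time (4.428755491s), which is enough to estimate effective pull throughput. A small back-of-the-envelope check using those two numbers:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the pull entries above.
	const bytesRead = 66357526.0 // "bytes read=66357526"
	elapsed, err := time.ParseDuration("4.428755491s")
	if err != nil {
		panic(err)
	}
	mib := bytesRead / (1 << 20)
	fmt.Printf("pulled %.1f MiB in %s (%.1f MiB/s)\n", mib, elapsed, mib/elapsed.Seconds())
	// ≈ 63.3 MiB in ~4.43s, i.e. roughly 14.3 MiB/s for this fetch.
}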
Sep 13 00:21:46.406186 containerd[1470]: time="2025-09-13T00:21:46.406111681Z" level=info msg="StartContainer for \"82adbb183a3246a991ad60e424f2f3bcccb7a662f31ae89844130d5cc2aaaa2e\" returns successfully" Sep 13 00:21:46.927405 containerd[1470]: time="2025-09-13T00:21:46.927300734Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:21:46.928209 containerd[1470]: time="2025-09-13T00:21:46.928142324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:21:46.932087 containerd[1470]: time="2025-09-13T00:21:46.932031316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 682.317341ms" Sep 13 00:21:46.932160 containerd[1470]: time="2025-09-13T00:21:46.932093266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:21:46.933252 containerd[1470]: time="2025-09-13T00:21:46.933219322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:21:46.938981 containerd[1470]: time="2025-09-13T00:21:46.938935882Z" level=info msg="CreateContainer within sandbox \"aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:21:47.161170 containerd[1470]: time="2025-09-13T00:21:47.161087931Z" level=info msg="CreateContainer within sandbox \"aff468ba6863f7af2631c264cc752f5595d30f7bc1ef6ac0b040dfcb996cc660\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2603340de84492730322e68045e15989bb0c86771e62cd77c27a9ec1878b06db\"" Sep 13 00:21:47.161833 containerd[1470]: time="2025-09-13T00:21:47.161792686Z" level=info msg="StartContainer for \"2603340de84492730322e68045e15989bb0c86771e62cd77c27a9ec1878b06db\"" Sep 13 00:21:47.197800 systemd[1]: Started cri-containerd-2603340de84492730322e68045e15989bb0c86771e62cd77c27a9ec1878b06db.scope - libcontainer container 2603340de84492730322e68045e15989bb0c86771e62cd77c27a9ec1878b06db. Sep 13 00:21:47.250898 containerd[1470]: time="2025-09-13T00:21:47.250843216Z" level=info msg="StartContainer for \"2603340de84492730322e68045e15989bb0c86771e62cd77c27a9ec1878b06db\" returns successfully" Sep 13 00:21:47.360948 systemd[1]: Started sshd@14-10.0.0.7:22-10.0.0.1:50492.service - OpenSSH per-connection server daemon (10.0.0.1:50492). Sep 13 00:21:47.471579 sshd[5843]: Accepted publickey for core from 10.0.0.1 port 50492 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU Sep 13 00:21:47.474116 sshd[5843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:21:47.483210 systemd-logind[1449]: New session 15 of user core. Sep 13 00:21:47.491964 systemd[1]: Started session-15.scope - Session 15 of User core. 
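In the pod_startup_latency_tracker entries just below, podStartE2EDuration is observedRunningTime minus podCreationTimestamp, while podStartSLOduration subtracts the image-pull window (firstStartedPulling to lastFinishedPulling) from that same interval — which is why goldmane shows 46.54s end-to-end but only 30.96s SLO. A quick check of the arithmetic using the monotonic m=+… offsets from the goldmane entry:

package main

import "fmt"

func main() {
	// Monotonic offsets (seconds) from the goldmane-54d579b49d-whpvq entry below.
	const (
		firstStartedPulling = 49.518404176
		lastFinishedPulling = 65.102973416
		e2e                 = 46.541740968 // podStartE2EDuration
	)
	pull := lastFinishedPulling - firstStartedPulling
	slo := e2e - pull
	fmt.Printf("image pull window:   %.3fs\n", pull) // ≈ 15.585s
	fmt.Printf("podStartSLOduration: %.3fs\n", slo)  // ≈ 30.957s, matching the logged value
}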
Sep 13 00:21:47.542419 kubelet[2525]: I0913 00:21:47.541762 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-whpvq" podStartSLOduration=30.957171728 podStartE2EDuration="46.541740968s" podCreationTimestamp="2025-09-13 00:21:01 +0000 UTC" firstStartedPulling="2025-09-13 00:21:30.664962495 +0000 UTC m=+49.518404176" lastFinishedPulling="2025-09-13 00:21:46.249531705 +0000 UTC m=+65.102973416" observedRunningTime="2025-09-13 00:21:46.523265614 +0000 UTC m=+65.376707295" watchObservedRunningTime="2025-09-13 00:21:47.541740968 +0000 UTC m=+66.395182649"
Sep 13 00:21:47.877498 sshd[5843]: pam_unix(sshd:session): session closed for user core
Sep 13 00:21:47.884594 systemd[1]: sshd@14-10.0.0.7:22-10.0.0.1:50492.service: Deactivated successfully.
Sep 13 00:21:47.887134 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:21:47.888149 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:21:47.889430 systemd-logind[1449]: Removed session 15.
Sep 13 00:21:48.455523 kubelet[2525]: I0913 00:21:48.455363 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8667466bbf-45njh" podStartSLOduration=34.230346142 podStartE2EDuration="50.455340932s" podCreationTimestamp="2025-09-13 00:20:58 +0000 UTC" firstStartedPulling="2025-09-13 00:21:30.708046759 +0000 UTC m=+49.561488441" lastFinishedPulling="2025-09-13 00:21:46.93304155 +0000 UTC m=+65.786483231" observedRunningTime="2025-09-13 00:21:47.542826175 +0000 UTC m=+66.396267856" watchObservedRunningTime="2025-09-13 00:21:48.455340932 +0000 UTC m=+67.308782623"
Sep 13 00:21:49.703539 containerd[1470]: time="2025-09-13T00:21:49.703476582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:21:49.704346 containerd[1470]: time="2025-09-13T00:21:49.704310273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 13 00:21:49.709266 containerd[1470]: time="2025-09-13T00:21:49.709224834Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:21:49.712027 containerd[1470]: time="2025-09-13T00:21:49.711993753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:21:49.712696 containerd[1470]: time="2025-09-13T00:21:49.712664762Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.779406354s"
Sep 13 00:21:49.712774 containerd[1470]: time="2025-09-13T00:21:49.712696873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 13 00:21:49.718129 containerd[1470]: time="2025-09-13T00:21:49.718076658Z" level=info msg="CreateContainer within sandbox \"67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 13 00:21:49.741693 containerd[1470]: time="2025-09-13T00:21:49.741527676Z" level=info msg="CreateContainer within sandbox \"67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1c6eaaade711df7fe47d3724462e09c1b72ec3f8f2f79f36d99cac97e49d5040\""
Sep 13 00:21:49.743723 containerd[1470]: time="2025-09-13T00:21:49.743689489Z" level=info msg="StartContainer for \"1c6eaaade711df7fe47d3724462e09c1b72ec3f8f2f79f36d99cac97e49d5040\""
Sep 13 00:21:49.785971 systemd[1]: run-containerd-runc-k8s.io-1c6eaaade711df7fe47d3724462e09c1b72ec3f8f2f79f36d99cac97e49d5040-runc.ILAYD5.mount: Deactivated successfully.
Sep 13 00:21:49.796868 systemd[1]: Started cri-containerd-1c6eaaade711df7fe47d3724462e09c1b72ec3f8f2f79f36d99cac97e49d5040.scope - libcontainer container 1c6eaaade711df7fe47d3724462e09c1b72ec3f8f2f79f36d99cac97e49d5040.
Sep 13 00:21:49.838320 containerd[1470]: time="2025-09-13T00:21:49.838195580Z" level=info msg="StartContainer for \"1c6eaaade711df7fe47d3724462e09c1b72ec3f8f2f79f36d99cac97e49d5040\" returns successfully"
Sep 13 00:21:49.839984 containerd[1470]: time="2025-09-13T00:21:49.839934219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 13 00:21:52.888601 systemd[1]: Started sshd@15-10.0.0.7:22-10.0.0.1:50122.service - OpenSSH per-connection server daemon (10.0.0.1:50122).
Sep 13 00:21:52.945531 sshd[5954]: Accepted publickey for core from 10.0.0.1 port 50122 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU
Sep 13 00:21:52.947297 sshd[5954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:21:52.951234 systemd-logind[1449]: New session 16 of user core.
Sep 13 00:21:52.964750 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 00:21:53.122763 sshd[5954]: pam_unix(sshd:session): session closed for user core
Sep 13 00:21:53.127880 systemd[1]: sshd@15-10.0.0.7:22-10.0.0.1:50122.service: Deactivated successfully.
Sep 13 00:21:53.130246 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:21:53.130958 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:21:53.132084 systemd-logind[1449]: Removed session 16.
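The two "Observed pod startup duration" entries above make the tracker's arithmetic visible: podStartSLOduration is the end-to-end duration minus the time spent pulling images (lastFinishedPulling - firstStartedPulling), and both can be recomputed from the logged monotonic m=+ offsets. A sketch reproducing the goldmane numbers (the apiserver entry checks out the same way):

package main

import "fmt"

func main() {
	// monotonic offsets (seconds) copied from the goldmane entry above
	const (
		firstStartedPulling = 49.518404176 // m=+49.518404176
		lastFinishedPulling = 65.102973416 // m=+65.102973416
		podStartE2EDuration = 46.541740968 // podStartE2EDuration="46.541740968s"
	)
	pulling := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pulling:       %.9fs\n", pulling)
	// matches podStartSLOduration=30.957171728 up to float rounding
	fmt.Printf("podStartSLOduration: %.9fs\n", podStartE2EDuration-pulling)
}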
Sep 13 00:21:53.846404 containerd[1470]: time="2025-09-13T00:21:53.846333902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:21:53.847501 containerd[1470]: time="2025-09-13T00:21:53.847435864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 13 00:21:53.849468 containerd[1470]: time="2025-09-13T00:21:53.849405250Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:21:53.851748 containerd[1470]: time="2025-09-13T00:21:53.851706642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:21:53.852342 containerd[1470]: time="2025-09-13T00:21:53.852285081Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 4.012318961s"
Sep 13 00:21:53.852342 containerd[1470]: time="2025-09-13T00:21:53.852334105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 13 00:21:53.858791 containerd[1470]: time="2025-09-13T00:21:53.858760044Z" level=info msg="CreateContainer within sandbox \"67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 13 00:21:53.875807 containerd[1470]: time="2025-09-13T00:21:53.875751357Z" level=info msg="CreateContainer within sandbox \"67eb6b293273ad88fb89a126e4dfa21dbdfe19aa95c16507cfb0d2c0ec78821a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"038e61bd3fb1e0be689ab56daf5ca58041a441e146625e29ad35f8718f3b74bd\""
Sep 13 00:21:53.876601 containerd[1470]: time="2025-09-13T00:21:53.876553506Z" level=info msg="StartContainer for \"038e61bd3fb1e0be689ab56daf5ca58041a441e146625e29ad35f8718f3b74bd\""
Sep 13 00:21:53.910089 systemd[1]: run-containerd-runc-k8s.io-038e61bd3fb1e0be689ab56daf5ca58041a441e146625e29ad35f8718f3b74bd-runc.UBqR1f.mount: Deactivated successfully.
Sep 13 00:21:53.917784 systemd[1]: Started cri-containerd-038e61bd3fb1e0be689ab56daf5ca58041a441e146625e29ad35f8718f3b74bd.scope - libcontainer container 038e61bd3fb1e0be689ab56daf5ca58041a441e146625e29ad35f8718f3b74bd.
Sep 13 00:21:53.955269 containerd[1470]: time="2025-09-13T00:21:53.955230135Z" level=info msg="StartContainer for \"038e61bd3fb1e0be689ab56daf5ca58041a441e146625e29ad35f8718f3b74bd\" returns successfully"
Sep 13 00:21:54.412252 kubelet[2525]: I0913 00:21:54.412197 2525 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 13 00:21:54.413193 kubelet[2525]: I0913 00:21:54.413169 2525 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 13 00:21:54.559430 kubelet[2525]: I0913 00:21:54.559357 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vkpzj" podStartSLOduration=29.555741087 podStartE2EDuration="52.559340232s" podCreationTimestamp="2025-09-13 00:21:02 +0000 UTC" firstStartedPulling="2025-09-13 00:21:30.849545353 +0000 UTC m=+49.702987034" lastFinishedPulling="2025-09-13 00:21:53.853144498 +0000 UTC m=+72.706586179" observedRunningTime="2025-09-13 00:21:54.558823993 +0000 UTC m=+73.412265674" watchObservedRunningTime="2025-09-13 00:21:54.559340232 +0000 UTC m=+73.412781903"
Sep 13 00:21:57.258662 kubelet[2525]: E0913 00:21:57.258432 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:21:58.134848 systemd[1]: Started sshd@16-10.0.0.7:22-10.0.0.1:50124.service - OpenSSH per-connection server daemon (10.0.0.1:50124).
Sep 13 00:21:58.191176 sshd[6055]: Accepted publickey for core from 10.0.0.1 port 50124 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU
Sep 13 00:21:58.193109 sshd[6055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:21:58.197223 systemd-logind[1449]: New session 17 of user core.
Sep 13 00:21:58.202801 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 00:21:58.394861 sshd[6055]: pam_unix(sshd:session): session closed for user core
Sep 13 00:21:58.400129 systemd[1]: sshd@16-10.0.0.7:22-10.0.0.1:50124.service: Deactivated successfully.
Sep 13 00:21:58.402494 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:21:58.403267 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:21:58.404214 systemd-logind[1449]: Removed session 17.
Sep 13 00:22:00.257871 kubelet[2525]: E0913 00:22:00.257825 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:22:03.408842 systemd[1]: Started sshd@17-10.0.0.7:22-10.0.0.1:33714.service - OpenSSH per-connection server daemon (10.0.0.1:33714).
Sep 13 00:22:03.482351 sshd[6070]: Accepted publickey for core from 10.0.0.1 port 33714 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU
Sep 13 00:22:03.484126 sshd[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:22:03.488719 systemd-logind[1449]: New session 18 of user core.
Sep 13 00:22:03.503927 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 00:22:03.733230 sshd[6070]: pam_unix(sshd:session): session closed for user core
Sep 13 00:22:03.739580 systemd[1]: sshd@17-10.0.0.7:22-10.0.0.1:33714.service: Deactivated successfully.
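The csi_plugin.go entries above show kubelet validating and registering the csi.tigera.io driver over the Unix socket its node-driver-registrar sidecar just exposed. A hypothetical probe (not kubelet's code) that checks the same socket is accepting connections, using the endpoint path from the log; it would need to run on the node with access to the kubelet plugins directory:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// endpoint registered in the csi_plugin.go entries above
	const sock = "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("socket not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket reachable:", sock)
}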
Sep 13 00:22:03.742350 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:22:03.744275 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:22:03.746085 systemd-logind[1449]: Removed session 18.
Sep 13 00:22:06.257854 kubelet[2525]: E0913 00:22:06.257786 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:22:08.751020 systemd[1]: Started sshd@18-10.0.0.7:22-10.0.0.1:33720.service - OpenSSH per-connection server daemon (10.0.0.1:33720).
Sep 13 00:22:08.800476 sshd[6112]: Accepted publickey for core from 10.0.0.1 port 33720 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU
Sep 13 00:22:08.802371 sshd[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:22:08.806279 systemd-logind[1449]: New session 19 of user core.
Sep 13 00:22:08.812769 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 00:22:09.094858 sshd[6112]: pam_unix(sshd:session): session closed for user core
Sep 13 00:22:09.103790 systemd[1]: sshd@18-10.0.0.7:22-10.0.0.1:33720.service: Deactivated successfully.
Sep 13 00:22:09.105873 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:22:09.107794 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:22:09.116889 systemd[1]: Started sshd@19-10.0.0.7:22-10.0.0.1:33724.service - OpenSSH per-connection server daemon (10.0.0.1:33724).
Sep 13 00:22:09.118005 systemd-logind[1449]: Removed session 19.
Sep 13 00:22:09.146152 sshd[6126]: Accepted publickey for core from 10.0.0.1 port 33724 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU
Sep 13 00:22:09.147903 sshd[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:22:09.152607 systemd-logind[1449]: New session 20 of user core.
Sep 13 00:22:09.162822 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 00:22:09.397644 sshd[6126]: pam_unix(sshd:session): session closed for user core
Sep 13 00:22:09.405491 systemd[1]: sshd@19-10.0.0.7:22-10.0.0.1:33724.service: Deactivated successfully.
Sep 13 00:22:09.407404 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 00:22:09.408843 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit.
Sep 13 00:22:09.414923 systemd[1]: Started sshd@20-10.0.0.7:22-10.0.0.1:33738.service - OpenSSH per-connection server daemon (10.0.0.1:33738).
Sep 13 00:22:09.416298 systemd-logind[1449]: Removed session 20.
Sep 13 00:22:09.459373 sshd[6140]: Accepted publickey for core from 10.0.0.1 port 33738 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU
Sep 13 00:22:09.461228 sshd[6140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:22:09.466044 systemd-logind[1449]: New session 21 of user core.
Sep 13 00:22:09.470768 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 00:22:10.062954 sshd[6140]: pam_unix(sshd:session): session closed for user core
Sep 13 00:22:10.070328 systemd[1]: sshd@20-10.0.0.7:22-10.0.0.1:33738.service: Deactivated successfully.
Sep 13 00:22:10.076316 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:22:10.078923 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit.
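The recurring dns.go:153 errors above are kubelet warning that the node's resolv.conf lists more nameservers than the libc resolver will honor: glibc reads at most 3 (MAXNS), so kubelet applies the first three and logs that the rest were omitted. A sketch of that trimming; the first three entries match the applied line in the log, while the fourth upstream entry is an assumption added to trigger the warning:

package main

import "fmt"

// glibc's resolver honors at most 3 nameservers from resolv.conf (MAXNS).
const maxNameservers = 3

func main() {
	// assumed upstream list: logged trio plus one hypothetical extra entry
	upstream := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	applied := upstream
	if len(applied) > maxNameservers {
		fmt.Printf("nameserver limits exceeded, omitting %v\n", applied[maxNameservers:])
		applied = applied[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", applied)
}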
Sep 13 00:22:10.094043 systemd[1]: Started sshd@21-10.0.0.7:22-10.0.0.1:43348.service - OpenSSH per-connection server daemon (10.0.0.1:43348).
Sep 13 00:22:10.095555 systemd-logind[1449]: Removed session 21.
Sep 13 00:22:10.127777 sshd[6159]: Accepted publickey for core from 10.0.0.1 port 43348 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU
Sep 13 00:22:10.129338 sshd[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:22:10.133325 systemd-logind[1449]: New session 22 of user core.
Sep 13 00:22:10.142757 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 00:22:10.450008 sshd[6159]: pam_unix(sshd:session): session closed for user core
Sep 13 00:22:10.463002 systemd[1]: sshd@21-10.0.0.7:22-10.0.0.1:43348.service: Deactivated successfully.
Sep 13 00:22:10.466167 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:22:10.468558 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:22:10.478437 systemd[1]: Started sshd@22-10.0.0.7:22-10.0.0.1:43350.service - OpenSSH per-connection server daemon (10.0.0.1:43350).
Sep 13 00:22:10.479756 systemd-logind[1449]: Removed session 22.
Sep 13 00:22:10.511245 sshd[6172]: Accepted publickey for core from 10.0.0.1 port 43350 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU
Sep 13 00:22:10.513269 sshd[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:22:10.518368 systemd-logind[1449]: New session 23 of user core.
Sep 13 00:22:10.527809 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 13 00:22:10.639548 sshd[6172]: pam_unix(sshd:session): session closed for user core
Sep 13 00:22:10.643855 systemd[1]: sshd@22-10.0.0.7:22-10.0.0.1:43350.service: Deactivated successfully.
Sep 13 00:22:10.646159 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:22:10.646872 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:22:10.647962 systemd-logind[1449]: Removed session 23.
Sep 13 00:22:11.257738 kubelet[2525]: E0913 00:22:11.257696 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:22:15.654460 systemd[1]: Started sshd@23-10.0.0.7:22-10.0.0.1:43354.service - OpenSSH per-connection server daemon (10.0.0.1:43354).
Sep 13 00:22:15.710534 sshd[6208]: Accepted publickey for core from 10.0.0.1 port 43354 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU
Sep 13 00:22:15.713570 sshd[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:22:15.724111 systemd-logind[1449]: New session 24 of user core.
Sep 13 00:22:15.740779 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 13 00:22:15.853992 sshd[6208]: pam_unix(sshd:session): session closed for user core
Sep 13 00:22:15.858793 systemd[1]: sshd@23-10.0.0.7:22-10.0.0.1:43354.service: Deactivated successfully.
Sep 13 00:22:15.861088 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 00:22:15.861996 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit.
Sep 13 00:22:15.863442 systemd-logind[1449]: Removed session 24.
Sep 13 00:22:20.869633 systemd[1]: Started sshd@24-10.0.0.7:22-10.0.0.1:55316.service - OpenSSH per-connection server daemon (10.0.0.1:55316).
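Each sshd/systemd-logind session above is bracketed by precise journal timestamps, so a session's wall time falls out by subtraction; session 23, for instance, opens at 00:22:10.513269 and closes at 00:22:10.639548. A sketch of that subtraction, with both timestamps copied from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "15:04:05.000000" // journal time-of-day with microseconds
	opened, err := time.Parse(layout, "00:22:10.513269") // session 23 opened
	if err != nil {
		panic(err)
	}
	closed, err := time.Parse(layout, "00:22:10.639548") // session 23 closed
	if err != nil {
		panic(err)
	}
	fmt.Println("session 23 lasted", closed.Sub(opened)) // 126.279ms
}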
Sep 13 00:22:20.913819 sshd[6250]: Accepted publickey for core from 10.0.0.1 port 55316 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU
Sep 13 00:22:20.915583 sshd[6250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:22:20.920859 systemd-logind[1449]: New session 25 of user core.
Sep 13 00:22:20.930772 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 00:22:21.151916 sshd[6250]: pam_unix(sshd:session): session closed for user core
Sep 13 00:22:21.156669 systemd[1]: sshd@24-10.0.0.7:22-10.0.0.1:55316.service: Deactivated successfully.
Sep 13 00:22:21.159542 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 00:22:21.160432 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit.
Sep 13 00:22:21.161684 systemd-logind[1449]: Removed session 25.
Sep 13 00:22:26.164661 systemd[1]: Started sshd@25-10.0.0.7:22-10.0.0.1:55324.service - OpenSSH per-connection server daemon (10.0.0.1:55324).
Sep 13 00:22:26.216422 sshd[6286]: Accepted publickey for core from 10.0.0.1 port 55324 ssh2: RSA SHA256:FzDeTyP4zuMFTcgqWnMiJDHE5ug/2IIJPVJ6aSFczcU
Sep 13 00:22:26.218320 sshd[6286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:22:26.222842 systemd-logind[1449]: New session 26 of user core.
Sep 13 00:22:26.226758 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 13 00:22:26.427527 sshd[6286]: pam_unix(sshd:session): session closed for user core
Sep 13 00:22:26.432404 systemd[1]: sshd@25-10.0.0.7:22-10.0.0.1:55324.service: Deactivated successfully.
Sep 13 00:22:26.435311 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 00:22:26.436407 systemd-logind[1449]: Session 26 logged out. Waiting for processes to exit.
Sep 13 00:22:26.438147 systemd-logind[1449]: Removed session 26.