Sep 13 00:15:21.029707 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:15:21.029734 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:15:21.029747 kernel: BIOS-provided physical RAM map:
Sep 13 00:15:21.029754 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 00:15:21.029760 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 00:15:21.029767 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 00:15:21.029774 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 13 00:15:21.029781 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 13 00:15:21.029787 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 00:15:21.029797 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 13 00:15:21.030161 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:15:21.030169 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 00:15:21.030179 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 13 00:15:21.030186 kernel: NX (Execute Disable) protection: active
Sep 13 00:15:21.030194 kernel: APIC: Static calls initialized
Sep 13 00:15:21.030209 kernel: SMBIOS 2.8 present.
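The two e820 entries flagged "usable" above are the guest's actual RAM. A minimal sketch (plain Python arithmetic, not anything the kernel runs) that totals them from the values as logged:

# Sketch: sum the "usable" ranges from the BIOS-e820 map logged above.
# The ranges are inclusive, so each spans (end - start + 1) bytes.
usable = [
    (0x0000000000000000, 0x000000000009fbff),
    (0x0000000000100000, 0x000000009cfdbfff),
]
total = sum(end - start + 1 for start, end in usable)
print(f"usable RAM: {total} bytes (~{total / 2**20:.1f} MiB)")
# ~2511.2 MiB here, close to the 2571752K (~2511.5 MiB) the kernel's
# "Memory:" line reports later, after its own e820 adjustments.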
Sep 13 00:15:21.030216 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 13 00:15:21.030223 kernel: Hypervisor detected: KVM
Sep 13 00:15:21.030230 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:15:21.030236 kernel: kvm-clock: using sched offset of 4224882342 cycles
Sep 13 00:15:21.030244 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:15:21.030251 kernel: tsc: Detected 2794.748 MHz processor
Sep 13 00:15:21.030259 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:15:21.030266 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:15:21.030276 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 13 00:15:21.030284 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 13 00:15:21.030291 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:15:21.030298 kernel: Using GB pages for direct mapping
Sep 13 00:15:21.030305 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:15:21.030312 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 13 00:15:21.030319 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:21.030326 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:21.030333 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:21.030343 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 13 00:15:21.030350 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:21.030358 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:21.030365 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:21.030372 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:21.030379 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 13 00:15:21.030386 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 13 00:15:21.030398 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 13 00:15:21.030408 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 13 00:15:21.030416 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 13 00:15:21.030423 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 13 00:15:21.030431 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 13 00:15:21.030440 kernel: No NUMA configuration found
Sep 13 00:15:21.030448 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 13 00:15:21.030458 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 13 00:15:21.030466 kernel: Zone ranges:
Sep 13 00:15:21.030473 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:15:21.030481 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 13 00:15:21.030488 kernel: Normal empty
Sep 13 00:15:21.030495 kernel: Movable zone start for each node
Sep 13 00:15:21.030503 kernel: Early memory node ranges
Sep 13 00:15:21.030510 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 00:15:21.030518 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 13 00:15:21.030525 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 13 00:15:21.030535 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:15:21.030545 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 00:15:21.030552 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 13 00:15:21.030560 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:15:21.030567 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:15:21.030574 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:15:21.030582 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:15:21.030589 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:15:21.030597 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:15:21.030607 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:15:21.030614 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:15:21.030621 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:15:21.030629 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:15:21.030636 kernel: TSC deadline timer available
Sep 13 00:15:21.030643 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 13 00:15:21.030651 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:15:21.030661 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 00:15:21.030674 kernel: kvm-guest: setup PV sched yield
Sep 13 00:15:21.030687 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 13 00:15:21.030694 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:15:21.030702 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:15:21.030709 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 13 00:15:21.030717 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 13 00:15:21.030724 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 13 00:15:21.030731 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 00:15:21.030738 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:15:21.030747 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:15:21.030761 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:15:21.030773 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:15:21.030783 kernel: random: crng init done
Sep 13 00:15:21.030793 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:15:21.030825 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:15:21.030832 kernel: Fallback order for Node 0: 0
Sep 13 00:15:21.030854 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 13 00:15:21.030864 kernel: Policy zone: DMA32
Sep 13 00:15:21.030880 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:15:21.030891 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 136900K reserved, 0K cma-reserved)
Sep 13 00:15:21.030901 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 00:15:21.030911 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:15:21.030921 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:15:21.030931 kernel: Dynamic Preempt: voluntary
Sep 13 00:15:21.030941 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:15:21.030951 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:15:21.030959 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 00:15:21.030969 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:15:21.030977 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:15:21.030985 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:15:21.030992 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:15:21.031004 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 00:15:21.031011 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 00:15:21.031019 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:15:21.031026 kernel: Console: colour VGA+ 80x25
Sep 13 00:15:21.031034 kernel: printk: console [ttyS0] enabled
Sep 13 00:15:21.031046 kernel: ACPI: Core revision 20230628
Sep 13 00:15:21.031057 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:15:21.031067 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:15:21.031077 kernel: x2apic enabled
Sep 13 00:15:21.031088 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:15:21.031098 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 13 00:15:21.031108 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 13 00:15:21.031119 kernel: kvm-guest: setup PV IPIs
Sep 13 00:15:21.031142 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:15:21.031153 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 13 00:15:21.031163 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 13 00:15:21.031174 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 00:15:21.031187 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 00:15:21.031198 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 00:15:21.031209 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:15:21.031220 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:15:21.031231 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:15:21.031245 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 00:15:21.031256 kernel: active return thunk: retbleed_return_thunk
Sep 13 00:15:21.031270 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 00:15:21.031281 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:15:21.031291 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:15:21.031302 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 13 00:15:21.031314 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 13 00:15:21.031324 kernel: active return thunk: srso_return_thunk
Sep 13 00:15:21.031339 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 13 00:15:21.031350 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:15:21.031375 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:15:21.031387 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:15:21.031406 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:15:21.031420 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 00:15:21.031442 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:15:21.031471 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:15:21.031501 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:15:21.031548 kernel: landlock: Up and running.
Sep 13 00:15:21.031562 kernel: SELinux: Initializing.
Sep 13 00:15:21.031591 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:15:21.031603 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:15:21.031614 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 00:15:21.031625 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:15:21.031636 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:15:21.031646 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:15:21.031665 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 00:15:21.031675 kernel: ... version: 0
Sep 13 00:15:21.031686 kernel: ... bit width: 48
Sep 13 00:15:21.031696 kernel: ... generic registers: 6
Sep 13 00:15:21.031707 kernel: ... value mask: 0000ffffffffffff
Sep 13 00:15:21.031717 kernel: ... max period: 00007fffffffffff
Sep 13 00:15:21.031727 kernel: ... fixed-purpose events: 0
Sep 13 00:15:21.031738 kernel: ... event mask: 000000000000003f
Sep 13 00:15:21.031748 kernel: signal: max sigframe size: 1776
Sep 13 00:15:21.031759 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:15:21.031772 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:15:21.031783 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:15:21.031791 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:15:21.031799 kernel: .... node #0, CPUs: #1 #2 #3
Sep 13 00:15:21.031820 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 00:15:21.031828 kernel: smpboot: Max logical packages: 1
Sep 13 00:15:21.031836 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 13 00:15:21.031855 kernel: devtmpfs: initialized
Sep 13 00:15:21.031865 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:15:21.031881 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:15:21.031891 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 00:15:21.031902 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:15:21.031912 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:15:21.031923 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:15:21.031933 kernel: audit: type=2000 audit(1757722519.885:1): state=initialized audit_enabled=0 res=1
Sep 13 00:15:21.031944 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:15:21.031954 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:15:21.031965 kernel: cpuidle: using governor menu
Sep 13 00:15:21.031978 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:15:21.031987 kernel: dca service started, version 1.12.1
Sep 13 00:15:21.031995 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 13 00:15:21.032005 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 13 00:15:21.032013 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:15:21.032021 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
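The delay-loop calibration above is skipped under KVM and preset from the 2794.748 MHz TSC. A worked check, assuming the conventional relation bogomips = lpj / (500000 / HZ) with HZ=1000:

# Worked check of the preset delay-loop values logged above,
# assuming bogomips = lpj / (500000 / HZ) with an assumed HZ=1000.
HZ = 1000           # assumed kernel tick rate
lpj = 2794748       # loops_per_jiffy preset from the 2794.748 MHz TSC
bogomips = lpj / (500000 / HZ)
print(f"{bogomips:.2f} BogoMIPS per CPU")  # 5589.50; the kernel truncates
                                           # the fraction and prints 5589.49
print(f"{4 * bogomips:.2f} total")         # 22357.98 for 4 CPUs, as logged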
Sep 13 00:15:21.032029 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:15:21.032037 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:15:21.032045 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:15:21.032055 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:15:21.032063 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:15:21.032071 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:15:21.032079 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:15:21.032087 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:15:21.032094 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:15:21.032102 kernel: ACPI: Interpreter enabled
Sep 13 00:15:21.032110 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 00:15:21.032118 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:15:21.032128 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:15:21.032136 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:15:21.032144 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 00:15:21.032152 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:15:21.032401 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:15:21.032580 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 00:15:21.032749 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 00:15:21.032770 kernel: PCI host bridge to bus 0000:00
Sep 13 00:15:21.032978 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:15:21.033262 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:15:21.033403 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:15:21.033575 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 13 00:15:21.033743 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 00:15:21.033937 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 13 00:15:21.034069 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:15:21.034237 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 13 00:15:21.034389 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 13 00:15:21.034539 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 13 00:15:21.034702 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 13 00:15:21.034865 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 13 00:15:21.035001 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:15:21.035160 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 13 00:15:21.035346 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 13 00:15:21.035537 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 13 00:15:21.035713 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 13 00:15:21.035974 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:15:21.036160 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 13 00:15:21.036401 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 13 00:15:21.036582 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 13 00:15:21.036752 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:15:21.036913 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 13 00:15:21.037090 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 13 00:15:21.037229 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 13 00:15:21.037359 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 13 00:15:21.037511 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 13 00:15:21.037644 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 00:15:21.037814 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 13 00:15:21.037963 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 13 00:15:21.038106 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 13 00:15:21.038258 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 13 00:15:21.038389 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 13 00:15:21.038406 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:15:21.038414 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:15:21.038422 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:15:21.038430 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:15:21.038438 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 00:15:21.038446 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 00:15:21.038455 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 00:15:21.038463 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 00:15:21.038471 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 00:15:21.038482 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 00:15:21.038491 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 00:15:21.038499 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 00:15:21.038507 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 00:15:21.038515 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 00:15:21.038523 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 00:15:21.038531 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 00:15:21.038539 kernel: iommu: Default domain type: Translated
Sep 13 00:15:21.038547 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:15:21.038558 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:15:21.038566 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:15:21.038574 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 00:15:21.038582 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 13 00:15:21.038725 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 00:15:21.038906 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 00:15:21.039071 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:15:21.039085 kernel: vgaarb: loaded
Sep 13 00:15:21.039100 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:15:21.039108 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:15:21.039116 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:15:21.039124 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:15:21.039133 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:15:21.039141 kernel: pnp: PnP ACPI init
Sep 13 00:15:21.039307 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 00:15:21.039322 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 00:15:21.039335 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:15:21.039343 kernel: NET: Registered PF_INET protocol family
Sep 13 00:15:21.039351 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:15:21.039359 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:15:21.039368 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:15:21.039376 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:15:21.039384 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 00:15:21.039392 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:15:21.039400 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:15:21.039411 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:15:21.039419 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:15:21.039428 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:15:21.039556 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:15:21.039676 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:15:21.039820 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:15:21.039953 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 13 00:15:21.040103 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 00:15:21.040232 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 13 00:15:21.040243 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:15:21.040251 kernel: Initialise system trusted keyrings
Sep 13 00:15:21.040259 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:15:21.040267 kernel: Key type asymmetric registered
Sep 13 00:15:21.040275 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:15:21.040283 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:15:21.040291 kernel: io scheduler mq-deadline registered
Sep 13 00:15:21.040299 kernel: io scheduler kyber registered
Sep 13 00:15:21.040311 kernel: io scheduler bfq registered
Sep 13 00:15:21.040319 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:15:21.040328 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 00:15:21.040336 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 00:15:21.040344 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 00:15:21.040352 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:15:21.040361 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:15:21.040369 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:15:21.040377 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:15:21.040388 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:15:21.040549 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 13 00:15:21.040563 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:15:21.040689 kernel: rtc_cmos 00:04: registered as rtc0
Sep 13 00:15:21.040700 kernel: hpet: Lost 1 RTC interrupts
Sep 13 00:15:21.040945 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:15:20 UTC (1757722520)
Sep 13 00:15:21.041081 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 13 00:15:21.041093 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 13 00:15:21.041107 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:15:21.041115 kernel: Segment Routing with IPv6
Sep 13 00:15:21.041123 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:15:21.041131 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:15:21.041139 kernel: Key type dns_resolver registered
Sep 13 00:15:21.041147 kernel: IPI shorthand broadcast: enabled
Sep 13 00:15:21.041155 kernel: sched_clock: Marking stable (924002891, 136995457)->(1136330165, -75331817)
Sep 13 00:15:21.041163 kernel: registered taskstats version 1
Sep 13 00:15:21.041171 kernel: Loading compiled-in X.509 certificates
Sep 13 00:15:21.041182 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:15:21.041190 kernel: Key type .fscrypt registered
Sep 13 00:15:21.041198 kernel: Key type fscrypt-provisioning registered
Sep 13 00:15:21.041207 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:15:21.041215 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:15:21.041223 kernel: ima: No architecture policies found
Sep 13 00:15:21.041231 kernel: clk: Disabling unused clocks
Sep 13 00:15:21.041239 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:15:21.041247 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:15:21.041258 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:15:21.041266 kernel: Run /init as init process
Sep 13 00:15:21.041274 kernel: with arguments:
Sep 13 00:15:21.041282 kernel: /init
Sep 13 00:15:21.041290 kernel: with environment:
Sep 13 00:15:21.041298 kernel: HOME=/
Sep 13 00:15:21.041306 kernel: TERM=linux
Sep 13 00:15:21.041314 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:15:21.041324 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:15:21.041338 systemd[1]: Detected virtualization kvm.
Sep 13 00:15:21.041347 systemd[1]: Detected architecture x86-64.
Sep 13 00:15:21.041356 systemd[1]: Running in initrd.
Sep 13 00:15:21.041364 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:15:21.041372 systemd[1]: Hostname set to .
Sep 13 00:15:21.041382 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:15:21.041390 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:15:21.041402 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:15:21.041411 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:15:21.041433 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:15:21.041444 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
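The rtc_cmos entry above gives the RTC reading and its Unix epoch side by side; the two agree, as a quick cross-check shows:

# Cross-check the rtc_cmos entry above: epoch 1757722520 should equal
# 2025-09-13T00:15:20 UTC.
from datetime import datetime, timezone
print(datetime.fromtimestamp(1757722520, tz=timezone.utc).isoformat())
# -> 2025-09-13T00:15:20+00:00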
Sep 13 00:15:21.041454 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:15:21.041463 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:15:21.041476 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:15:21.041485 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:15:21.041494 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:15:21.041503 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:15:21.041512 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:15:21.041521 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:15:21.041530 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:15:21.041541 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:15:21.041550 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:15:21.041559 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:15:21.041568 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:15:21.041577 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:15:21.041586 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:15:21.041595 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:15:21.041604 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:15:21.041612 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:15:21.041624 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:15:21.041633 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:15:21.041642 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:15:21.041650 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:15:21.041659 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:15:21.041668 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:15:21.041677 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:15:21.041686 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:15:21.041698 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:15:21.041707 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:15:21.041716 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:15:21.041752 systemd-journald[192]: Collecting audit messages is disabled.
Sep 13 00:15:21.041777 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:15:21.041789 systemd-journald[192]: Journal started
Sep 13 00:15:21.041822 systemd-journald[192]: Runtime Journal (/run/log/journal/0c01f2fbf27648e989f3da84cb986edf) is 6.0M, max 48.4M, 42.3M free.
Sep 13 00:15:21.033850 systemd-modules-load[194]: Inserted module 'overlay'
Sep 13 00:15:21.091294 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:15:21.091327 kernel: Bridge firewalling registered
Sep 13 00:15:21.091339 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:15:21.064907 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 13 00:15:21.091680 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:15:21.108121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:15:21.109589 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:15:21.112659 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:15:21.115373 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:15:21.119006 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:15:21.126001 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:15:21.129147 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:15:21.136591 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:15:21.180216 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:15:21.202792 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:15:21.211045 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:15:21.220156 systemd-resolved[220]: Positive Trust Anchors:
Sep 13 00:15:21.220182 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:15:21.220225 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:15:21.229797 dracut-cmdline[232]: dracut-dracut-053
Sep 13 00:15:21.231833 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:15:21.238725 systemd-resolved[220]: Defaulting to hostname 'linux'.
Sep 13 00:15:21.240997 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:15:21.243345 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:15:21.342893 kernel: SCSI subsystem initialized
Sep 13 00:15:21.354861 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:15:21.365846 kernel: iscsi: registered transport (tcp)
Sep 13 00:15:21.389949 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:15:21.390048 kernel: QLogic iSCSI HBA Driver
Sep 13 00:15:21.461554 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
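The dracut-cmdline entry above echoes the full kernel command line, including the dm-verity parameters that will be used to set up /usr. A small sketch of splitting such a line into key=value pairs (simple string handling, not dracut's real parser, which also handles quoting; later duplicates like the repeated rootflags=rw win here):

# Sketch: split the kernel command line logged above into key=value pairs.
cmdline = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
           "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
           "console=ttyS0,115200 flatcar.first_boot=detected "
           "verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534")
params = dict(tok.split("=", 1) for tok in cmdline.split() if "=" in tok)
print(params["root"])            # LABEL=ROOT
print(params["verity.usrhash"])  # root hash consumed by verity-setup below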
Sep 13 00:15:21.471107 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:15:21.502889 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:15:21.502990 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:15:21.503009 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:15:21.551860 kernel: raid6: avx2x4 gen() 27212 MB/s
Sep 13 00:15:21.612881 kernel: raid6: avx2x2 gen() 21184 MB/s
Sep 13 00:15:21.630150 kernel: raid6: avx2x1 gen() 16473 MB/s
Sep 13 00:15:21.630242 kernel: raid6: using algorithm avx2x4 gen() 27212 MB/s
Sep 13 00:15:21.649322 kernel: raid6: .... xor() 5647 MB/s, rmw enabled
Sep 13 00:15:21.649425 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 00:15:21.676867 kernel: xor: automatically using best checksumming function avx
Sep 13 00:15:21.862847 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:15:21.878674 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:15:21.892042 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:15:21.905106 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Sep 13 00:15:21.910231 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:15:21.924039 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:15:21.940139 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Sep 13 00:15:21.978602 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:15:21.991112 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:15:22.072110 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:15:22.083212 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:15:22.101073 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:15:22.103733 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:15:22.108505 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:15:22.110962 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:15:22.120101 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:15:22.131859 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 13 00:15:22.133838 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:15:22.137976 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:15:22.145667 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 13 00:15:22.154696 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:15:22.158221 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:15:22.158260 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:15:22.154868 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:15:22.159027 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:15:22.168940 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:15:22.168968 kernel: GPT:9289727 != 19775487
Sep 13 00:15:22.168992 kernel: GPT:Alternate GPT header not at the end of the disk.
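The raid6 entries above benchmark each available AVX2 variant and keep the fastest generation function; the selection amounts to a max over the measured rates:

# The raid6 benchmark logged above, reduced to its selection step.
results = {"avx2x4": 27212, "avx2x2": 21184, "avx2x1": 16473}  # MB/s from the log
best = max(results, key=results.get)
print(f"raid6: using algorithm {best} gen() {results[best]} MB/s")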
Sep 13 00:15:22.169007 kernel: GPT:9289727 != 19775487
Sep 13 00:15:22.169020 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:15:22.169034 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:15:22.160332 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:15:22.164153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:15:22.172796 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:15:22.210870 kernel: libata version 3.00 loaded.
Sep 13 00:15:22.214228 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:15:22.234130 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (473)
Sep 13 00:15:22.234208 kernel: ahci 0000:00:1f.2: version 3.0
Sep 13 00:15:22.234467 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 13 00:15:22.236836 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (458)
Sep 13 00:15:22.238489 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 13 00:15:22.293854 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 13 00:15:22.294135 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 13 00:15:22.294289 kernel: scsi host0: ahci
Sep 13 00:15:22.294478 kernel: scsi host1: ahci
Sep 13 00:15:22.294639 kernel: scsi host2: ahci
Sep 13 00:15:22.294797 kernel: scsi host3: ahci
Sep 13 00:15:22.294995 kernel: scsi host4: ahci
Sep 13 00:15:22.295160 kernel: scsi host5: ahci
Sep 13 00:15:22.295313 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Sep 13 00:15:22.295330 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Sep 13 00:15:22.295340 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Sep 13 00:15:22.295350 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Sep 13 00:15:22.295361 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Sep 13 00:15:22.295371 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Sep 13 00:15:22.298439 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:15:22.308666 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 13 00:15:22.320233 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 13 00:15:22.320849 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 13 00:15:22.327800 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:15:22.376228 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:15:22.379455 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:15:22.393386 disk-uuid[555]: Primary Header is updated.
Sep 13 00:15:22.393386 disk-uuid[555]: Secondary Entries is updated.
Sep 13 00:15:22.393386 disk-uuid[555]: Secondary Header is updated.
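The GPT warnings above fit an image built for a smaller disk and then written to this 10.1 GB virtio disk; checking the arithmetic against the logged values:

# The GPT mismatch logged above: vda has 19775488 512-byte sectors, so the
# backup GPT header belongs at the last LBA, 19775487. The kernel found it
# at LBA 9289727 instead, i.e. the image was laid out for a disk of
# 9289728 sectors and later placed on a larger one.
total_sectors = 19775488
expected_alt = total_sectors - 1          # 19775487
found_alt = 9289727
print(expected_alt, "!=", found_alt)       # matches the logged mismatch
print(f"original image size ~{(found_alt + 1) * 512 / 2**30:.1f} GiB")

The disk-uuid[555] entries that follow, reporting the primary and secondary headers as updated, appear to be the repair step for exactly this mismatch, which would explain why vda's partition table is re-read afterwards.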
Sep 13 00:15:22.397909 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:15:22.406856 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:15:22.410587 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:15:22.557653 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 13 00:15:22.557750 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 13 00:15:22.557768 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 13 00:15:22.558848 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 13 00:15:22.559842 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 13 00:15:22.560836 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 13 00:15:22.561852 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 13 00:15:22.563226 kernel: ata3.00: applying bridge limits
Sep 13 00:15:22.563249 kernel: ata3.00: configured for UDMA/100
Sep 13 00:15:22.563842 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 13 00:15:22.614860 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 13 00:15:22.615164 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 00:15:22.629156 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 13 00:15:23.411872 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:15:23.412847 disk-uuid[557]: The operation has completed successfully.
Sep 13 00:15:23.453434 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:15:23.453596 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:15:23.481220 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:15:23.486951 sh[591]: Success
Sep 13 00:15:23.503877 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 13 00:15:23.550346 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:15:23.564722 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:15:23.567689 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:15:23.585095 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:15:23.585172 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:15:23.585190 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:15:23.586187 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:15:23.586982 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:15:23.594407 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:15:23.596092 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:15:23.603269 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:15:23.606688 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:15:23.615849 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:15:23.615905 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:15:23.615921 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:15:23.619846 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:15:23.632736 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:15:23.635256 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:15:23.652648 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:15:23.661179 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:15:23.997366 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:15:24.115305 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:15:24.118541 ignition[673]: Ignition 2.19.0
Sep 13 00:15:24.118552 ignition[673]: Stage: fetch-offline
Sep 13 00:15:24.118605 ignition[673]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:15:24.118618 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:15:24.118780 ignition[673]: parsed url from cmdline: ""
Sep 13 00:15:24.118787 ignition[673]: no config URL provided
Sep 13 00:15:24.118794 ignition[673]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:15:24.118821 ignition[673]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:15:24.118860 ignition[673]: op(1): [started] loading QEMU firmware config module
Sep 13 00:15:24.118867 ignition[673]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 13 00:15:24.139260 ignition[673]: op(1): [finished] loading QEMU firmware config module
Sep 13 00:15:24.146358 systemd-networkd[777]: lo: Link UP
Sep 13 00:15:24.146380 systemd-networkd[777]: lo: Gained carrier
Sep 13 00:15:24.148858 systemd-networkd[777]: Enumeration completed
Sep 13 00:15:24.148996 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:15:24.149459 systemd[1]: Reached target network.target - Network.
Sep 13 00:15:24.150267 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:15:24.150273 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:15:24.154350 systemd-networkd[777]: eth0: Link UP
Sep 13 00:15:24.154355 systemd-networkd[777]: eth0: Gained carrier
Sep 13 00:15:24.154372 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:15:24.173916 systemd-networkd[777]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:15:24.198247 ignition[673]: parsing config with SHA512: 0c3dcbda940c575814b38d5323fcdaac6db6a8bce3a13c641132feecbf10e4579d76fb0b675b7fe511845a6bc4d25a4d3b1f5249715da2f9be080077ffb22b83
Sep 13 00:15:24.203361 unknown[673]: fetched base config from "system"
Sep 13 00:15:24.203396 unknown[673]: fetched user config from "qemu"
Sep 13 00:15:24.209116 ignition[673]: fetch-offline: fetch-offline passed
Sep 13 00:15:24.210682 ignition[673]: Ignition finished successfully
Sep 13 00:15:24.210897 systemd-resolved[220]: Detected conflict on linux IN A 10.0.0.139
Sep 13 00:15:24.210914 systemd-resolved[220]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Sep 13 00:15:24.213710 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:15:24.215534 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 00:15:24.228300 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
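Ignition logs a SHA512 digest for the config it is about to parse ("parsing config with SHA512: 0c3dcbda..." above). A minimal sketch of the same kind of digest using hashlib; the file path is hypothetical, since here the config actually arrived via the QEMU firmware interface (qemu_fw_cfg) rather than a file:

# Sketch: compute the kind of digest Ignition logs before parsing a config.
import hashlib

with open("config.ign", "rb") as f:   # hypothetical local copy of the config
    digest = hashlib.sha512(f.read()).hexdigest()
print("parsing config with SHA512:", digest)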
Sep 13 00:15:24.245364 ignition[783]: Ignition 2.19.0
Sep 13 00:15:24.245385 ignition[783]: Stage: kargs
Sep 13 00:15:24.245588 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:15:24.245601 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:15:24.246399 ignition[783]: kargs: kargs passed
Sep 13 00:15:24.246449 ignition[783]: Ignition finished successfully
Sep 13 00:15:24.253903 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:15:24.264037 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:15:24.284399 ignition[791]: Ignition 2.19.0
Sep 13 00:15:24.284416 ignition[791]: Stage: disks
Sep 13 00:15:24.284656 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:15:24.284674 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:15:24.285772 ignition[791]: disks: disks passed
Sep 13 00:15:24.285864 ignition[791]: Ignition finished successfully
Sep 13 00:15:24.293460 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:15:24.294464 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:15:24.296234 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:15:24.298682 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:15:24.302030 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:15:24.304222 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:15:24.313028 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:15:24.327456 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 13 00:15:24.334387 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:15:24.345987 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:15:24.436848 kernel: EXT4-fs (vda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:15:24.438064 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:15:24.439735 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:15:24.449936 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:15:24.452073 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:15:24.453575 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 00:15:24.459821 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (809)
Sep 13 00:15:24.453631 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:15:24.465397 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:15:24.465416 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:15:24.465428 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:15:24.453663 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:15:24.461780 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:15:24.466531 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
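The systemd-fsck summary above is a pair of used/total counters for the freshly checked ROOT filesystem; read as percentages:

# Reading the systemd-fsck summary above: "14/553520 files, 52654/553472
# blocks" is inodes-used/inodes-total and blocks-used/blocks-total.
files_used, files_total = 14, 553520
blocks_used, blocks_total = 52654, 553472
print(f"inodes: {100 * files_used / files_total:.3f}% used")
print(f"blocks: {100 * blocks_used / blocks_total:.1f}% used")  # ~9.5%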
Sep 13 00:15:24.470835 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:15:24.473763 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:15:24.507752 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:15:24.512578 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:15:24.517100 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:15:24.522331 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:15:24.619425 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:15:24.635990 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:15:24.637955 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:15:24.645826 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:15:24.647782 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:15:24.666237 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:15:24.920120 ignition[927]: INFO : Ignition 2.19.0
Sep 13 00:15:24.920120 ignition[927]: INFO : Stage: mount
Sep 13 00:15:24.922177 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:15:24.922177 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:15:24.922177 ignition[927]: INFO : mount: mount passed
Sep 13 00:15:24.922177 ignition[927]: INFO : Ignition finished successfully
Sep 13 00:15:24.923436 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:15:24.930045 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:15:24.939648 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:15:24.952826 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937)
Sep 13 00:15:24.952869 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:15:24.954597 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:15:24.954626 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:15:24.957833 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:15:24.959544 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:15:24.983073 ignition[954]: INFO : Ignition 2.19.0
Sep 13 00:15:24.983073 ignition[954]: INFO : Stage: files
Sep 13 00:15:24.984990 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:15:24.984990 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:15:24.984990 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:15:24.988849 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:15:24.988849 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:15:24.988849 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:15:24.988849 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:15:24.994707 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:15:24.994707 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 13 00:15:24.994707 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 13 00:15:24.989316 unknown[954]: wrote ssh authorized keys file for user: core
Sep 13 00:15:25.106502 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:15:25.338139 systemd-networkd[777]: eth0: Gained IPv6LL
Sep 13 00:15:25.908097 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 13 00:15:25.910644 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:15:25.912797 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:15:25.912797 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:15:25.916664 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:15:25.916664 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:15:25.920141 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:15:25.920141 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:15:25.924079 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:15:25.926306 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:15:25.928544 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:15:25.928544 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:15:25.933474 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:15:25.933474 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:15:25.938073 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 13 00:15:26.261917 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 00:15:27.221486 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:15:27.221486 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 13 00:15:27.227190 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:15:27.230023 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:15:27.230023 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 13 00:15:27.230023 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 13 00:15:27.230023 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:15:27.230023 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:15:27.230023 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 13 00:15:27.230023 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:15:27.267541 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:15:27.273476 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:15:27.275509 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:15:27.276977 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:15:27.278324 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:15:27.279915 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:15:27.284318 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:15:27.287300 ignition[954]: INFO : files: files passed
Sep 13 00:15:27.288026 ignition[954]: INFO : Ignition finished successfully
Sep 13 00:15:27.291182 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:15:27.306186 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:15:27.307976 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:15:27.316241 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:15:27.317299 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:15:27.321394 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 13 00:15:27.326058 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:15:27.326058 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:15:27.329426 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:15:27.333210 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:15:27.335978 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:15:27.349042 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:15:27.378353 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:15:27.378558 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:15:27.407295 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:15:27.409690 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:15:27.412071 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:15:27.421335 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:15:27.437564 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:15:27.446256 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:15:27.457142 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:15:27.460181 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:15:27.463044 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:15:27.465420 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:15:27.466695 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:15:27.469615 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:15:27.472034 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:15:27.474257 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:15:27.476846 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:15:27.479579 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:15:27.482324 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:15:27.484464 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:15:27.487312 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:15:27.490036 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:15:27.492469 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:15:27.494405 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:15:27.496379 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:15:27.499324 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:15:27.502001 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:15:27.503570 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:15:27.505161 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:15:27.509580 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:15:27.511039 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:15:27.514151 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:15:27.514584 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:15:27.519456 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:15:27.523082 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:15:27.527440 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:15:27.531191 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:15:27.533431 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:15:27.535970 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:15:27.536156 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:15:27.538817 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:15:27.540931 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:15:27.543303 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:15:27.544606 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:15:27.547486 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:15:27.547631 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:15:27.564181 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:15:27.566591 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:15:27.566815 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:15:27.573131 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:15:27.575369 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:15:27.576744 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:15:27.579794 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:15:27.581187 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:15:27.589446 ignition[1008]: INFO : Ignition 2.19.0
Sep 13 00:15:27.589446 ignition[1008]: INFO : Stage: umount
Sep 13 00:15:27.595427 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:15:27.595427 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:15:27.595427 ignition[1008]: INFO : umount: umount passed
Sep 13 00:15:27.595427 ignition[1008]: INFO : Ignition finished successfully
Sep 13 00:15:27.590266 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:15:27.590438 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:15:27.602527 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:15:27.602710 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:15:27.607187 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:15:27.609405 systemd[1]: Stopped target network.target - Network.
Sep 13 00:15:27.611379 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:15:27.611456 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:15:27.613767 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:15:27.616016 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:15:27.618071 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:15:27.618151 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:15:27.621262 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:15:27.622319 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:15:27.625087 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:15:27.627857 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:15:27.631906 systemd-networkd[777]: eth0: DHCPv6 lease lost
Sep 13 00:15:27.635137 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:15:27.635560 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:15:27.637451 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:15:27.637511 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:15:27.652062 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:15:27.652589 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:15:27.652695 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:15:27.653457 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:15:27.654036 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:15:27.654212 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:15:27.665477 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:15:27.665582 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:15:27.666934 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:15:27.667001 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:15:27.667776 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:15:27.667863 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:15:27.679480 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:15:27.679705 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:15:27.682425 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:15:27.682590 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:15:27.686285 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:15:27.686420 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:15:27.688020 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:15:27.688086 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:15:27.690478 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:15:27.690566 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:15:27.693303 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:15:27.693362 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:15:27.695412 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:15:27.695478 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:15:27.709203 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:15:27.710450 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:15:27.710550 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:15:27.713004 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:15:27.713074 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:15:27.719784 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:15:27.719950 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:15:28.043388 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:15:28.043592 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:15:28.048074 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:15:28.049903 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:15:28.049990 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:15:28.061192 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:15:28.072534 systemd[1]: Switching root.
Sep 13 00:15:28.111198 systemd-journald[192]: Journal stopped
Sep 13 00:15:31.097192 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:15:31.097272 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:15:31.097293 kernel: SELinux: policy capability open_perms=1
Sep 13 00:15:31.097304 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:15:31.097317 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:15:31.097335 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:15:31.097347 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:15:31.097365 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:15:31.097376 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:15:31.097390 kernel: audit: type=1403 audit(1757722529.543:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:15:31.097411 systemd[1]: Successfully loaded SELinux policy in 85.983ms.
Sep 13 00:15:31.097441 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.232ms.
Sep 13 00:15:31.097456 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:15:31.097468 systemd[1]: Detected virtualization kvm.
Sep 13 00:15:31.097481 systemd[1]: Detected architecture x86-64.
Sep 13 00:15:31.097493 systemd[1]: Detected first boot.
Sep 13 00:15:31.097506 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:15:31.097518 zram_generator::config[1053]: No configuration found.
Sep 13 00:15:31.097532 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:15:31.097557 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:15:31.097571 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 00:15:31.097584 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:15:31.097597 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:15:31.097609 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:15:31.097621 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:15:31.097634 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:15:31.097647 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:15:31.097662 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:15:31.097675 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:15:31.097688 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:15:31.097701 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:15:31.097714 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:15:31.097726 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:15:31.097739 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:15:31.097751 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:15:31.097767 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:15:31.097787 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 00:15:31.097818 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:15:31.097836 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 00:15:31.097849 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 00:15:31.097862 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:15:31.097874 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:15:31.097886 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:15:31.097907 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:15:31.097927 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:15:31.097947 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:15:31.097962 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:15:31.097977 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:15:31.097993 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:15:31.098008 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:15:31.098023 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:15:31.098042 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:15:31.098057 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:15:31.098083 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:15:31.098099 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:15:31.098115 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:15:31.098131 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:15:31.098148 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:15:31.098163 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:15:31.098179 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:15:31.098195 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:15:31.098216 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:15:31.098232 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:15:31.098247 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:15:31.098264 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:15:31.098280 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:15:31.098297 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:15:31.098314 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:15:31.098331 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:15:31.098348 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:15:31.098376 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:15:31.098394 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:15:31.098413 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 00:15:31.098431 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:15:31.098448 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:15:31.098465 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:15:31.098511 systemd-journald[1116]: Collecting audit messages is disabled.
Sep 13 00:15:31.098547 kernel: loop: module loaded
Sep 13 00:15:31.098575 kernel: fuse: init (API version 7.39)
Sep 13 00:15:31.098592 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:15:31.098610 systemd-journald[1116]: Journal started
Sep 13 00:15:31.098640 systemd-journald[1116]: Runtime Journal (/run/log/journal/0c01f2fbf27648e989f3da84cb986edf) is 6.0M, max 48.4M, 42.3M free.
Sep 13 00:15:30.559520 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:15:30.577541 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 13 00:15:30.578230 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:15:31.103986 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:15:31.118454 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:15:31.128568 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:15:31.130799 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:15:31.130861 systemd[1]: Stopped verity-setup.service.
Sep 13 00:15:31.133857 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:15:31.164589 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:15:31.166202 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:15:31.166851 kernel: ACPI: bus type drm_connector registered
Sep 13 00:15:31.168339 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:15:31.169780 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:15:31.171042 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:15:31.172302 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:15:31.173623 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:15:31.175037 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:15:31.176650 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:15:31.176884 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:15:31.178541 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:15:31.178767 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:15:31.180244 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:15:31.180446 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:15:31.182095 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:15:31.182279 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:15:31.183917 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:15:31.184114 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:15:31.185931 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:15:31.186151 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:15:31.187750 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:15:31.189474 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:15:31.191067 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:15:31.213893 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:15:31.231023 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:15:31.237952 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:15:31.239306 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:15:31.239366 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:15:31.241817 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 13 00:15:31.244531 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:15:31.246999 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:15:31.247457 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:15:31.283112 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:15:31.286280 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:15:31.287633 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:15:31.288880 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:15:31.290218 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:15:31.296780 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:15:31.315058 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:15:31.318614 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:15:31.320512 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:15:31.322055 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:15:31.323700 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:15:31.330260 systemd-journald[1116]: Time spent on flushing to /var/log/journal/0c01f2fbf27648e989f3da84cb986edf is 29.869ms for 955 entries.
Sep 13 00:15:31.330260 systemd-journald[1116]: System Journal (/var/log/journal/0c01f2fbf27648e989f3da84cb986edf) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:15:32.692606 systemd-journald[1116]: Received client request to flush runtime journal.
Sep 13 00:15:32.692742 kernel: loop0: detected capacity change from 0 to 140768
Sep 13 00:15:32.692792 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:15:32.692858 kernel: loop1: detected capacity change from 0 to 142488
Sep 13 00:15:32.692886 kernel: loop2: detected capacity change from 0 to 224512
Sep 13 00:15:32.692912 kernel: loop3: detected capacity change from 0 to 140768
Sep 13 00:15:32.692943 kernel: loop4: detected capacity change from 0 to 142488
Sep 13 00:15:32.693029 kernel: loop5: detected capacity change from 0 to 224512
Sep 13 00:15:32.693070 zram_generator::config[1207]: No configuration found.
Sep 13 00:15:31.333759 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 13 00:15:31.401185 udevadm[1158]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 13 00:15:31.544576 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:15:31.893098 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:15:31.896839 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:15:31.913125 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:15:32.139008 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 13 00:15:32.139958 (sd-merge)[1181]: Merged extensions into '/usr'.
Sep 13 00:15:32.181267 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:15:32.181280 systemd[1]: Reloading...
Sep 13 00:15:32.690400 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:15:32.753849 systemd[1]: Reloading finished in 571 ms.
Sep 13 00:15:32.795107 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:15:32.840354 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:15:32.842144 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:15:32.861158 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:15:32.864475 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:15:32.900307 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:15:32.900571 systemd[1]: Reloading...
Sep 13 00:15:33.002937 zram_generator::config[1282]: No configuration found.
Sep 13 00:15:33.055393 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:15:33.137172 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:15:33.201609 systemd[1]: Reloading finished in 300 ms.
Sep 13 00:15:33.226761 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:15:33.243642 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:15:33.262266 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:15:33.265349 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:15:33.275274 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:15:33.275571 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:15:33.281111 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:15:33.296994 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:15:33.300203 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:15:33.301417 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:15:33.301549 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:15:33.303564 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:15:33.303730 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:15:33.303923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:15:33.304014 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:15:33.309347 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:15:33.309562 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:15:33.313570 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:15:33.318378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:15:33.318560 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:15:33.319512 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:15:33.319718 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:15:33.321603 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:15:33.321794 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:15:33.323775 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:15:33.324056 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:15:33.328155 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:15:33.328344 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:15:33.331081 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:15:33.331998 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 13 00:15:33.335477 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Sep 13 00:15:33.335506 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Sep 13 00:15:33.335834 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:15:33.339475 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:15:33.340254 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:15:33.341347 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:15:33.341715 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Sep 13 00:15:33.342087 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Sep 13 00:15:33.344230 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:15:33.346492 systemd-tmpfiles[1316]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:15:33.346569 systemd-tmpfiles[1316]: Skipping /boot
Sep 13 00:15:33.347604 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:15:33.347719 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:15:33.367270 systemd-tmpfiles[1316]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:15:33.367295 systemd-tmpfiles[1316]: Skipping /boot
Sep 13 00:15:33.409695 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:15:33.428162 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:15:33.520415 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:15:33.525096 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:15:33.529325 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:15:33.538015 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 00:15:33.552305 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:15:33.559301 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:15:33.584605 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:15:33.588778 augenrules[1353]: No rules
Sep 13 00:15:33.596222 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:15:33.598076 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:15:33.638314 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:15:33.640769 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:15:33.654548 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:15:33.664322 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:15:33.666034 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:15:33.666317 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:15:33.693397 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:15:33.705792 systemd-udevd[1363]: Using default interface naming scheme 'v255'.
Sep 13 00:15:33.725595 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:15:33.751194 systemd-resolved[1341]: Positive Trust Anchors:
Sep 13 00:15:33.751211 systemd-resolved[1341]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:15:33.751222 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:15:33.751243 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:15:33.755673 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 00:15:33.758569 systemd-resolved[1341]: Defaulting to hostname 'linux'.
Sep 13 00:15:33.759955 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:15:33.761089 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:15:33.762331 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:15:33.779928 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1373)
Sep 13 00:15:33.820103 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 13 00:15:33.830992 systemd-networkd[1384]: lo: Link UP
Sep 13 00:15:33.831006 systemd-networkd[1384]: lo: Gained carrier
Sep 13 00:15:33.834082 systemd-networkd[1384]: Enumeration completed
Sep 13 00:15:33.834187 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:15:33.835672 systemd[1]: Reached target network.target - Network.
Sep 13 00:15:33.845662 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:15:33.845678 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:15:33.847145 systemd-networkd[1384]: eth0: Link UP
Sep 13 00:15:33.847158 systemd-networkd[1384]: eth0: Gained carrier
Sep 13 00:15:33.847176 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:15:33.853316 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:15:33.861379 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:15:33.868190 systemd-networkd[1384]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:15:33.869854 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 13 00:15:33.872687 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection.
Sep 13 00:15:33.873871 systemd-timesyncd[1344]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 13 00:15:33.873937 systemd-timesyncd[1344]: Initial clock synchronization to Sat 2025-09-13 00:15:34.207917 UTC.
Sep 13 00:15:33.874430 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:15:33.881086 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:15:33.895985 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 13 00:15:33.896528 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 13 00:15:33.897244 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:15:33.897271 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 13 00:15:33.906852 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 13 00:15:33.916789 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:15:34.009965 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:15:34.048783 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:15:34.143022 kernel: kvm_amd: TSC scaling supported
Sep 13 00:15:34.143193 kernel: kvm_amd: Nested Virtualization enabled
Sep 13 00:15:34.143457 kernel: kvm_amd: Nested Paging enabled
Sep 13 00:15:34.144316 kernel: kvm_amd: LBR virtualization supported
Sep 13 00:15:34.144369 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 13 00:15:34.145867 kernel: kvm_amd: Virtual GIF supported
Sep 13 00:15:34.174089 kernel: EDAC MC: Ver: 3.0.0
Sep 13 00:15:34.291344 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 13 00:15:34.320314 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:15:34.336267 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 13 00:15:34.348970 lvm[1416]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:15:34.410217 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 13 00:15:34.412328 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:15:34.413677 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:15:34.415160 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:15:34.416792 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:15:34.418645 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:15:34.420125 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:15:34.421811 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:15:34.423416 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:15:34.423460 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:15:34.424741 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:15:34.427501 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:15:34.431038 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:15:34.442962 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:15:34.447232 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 13 00:15:34.449631 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:15:34.451439 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:15:34.452748 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:15:34.454100 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:15:34.454137 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:15:34.455987 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:15:34.459124 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:15:34.465995 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:15:34.471418 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:15:34.472883 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:15:34.479316 lvm[1420]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:15:34.478998 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:15:34.484873 jq[1423]: false
Sep 13 00:15:34.485053 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 00:15:34.489092 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 00:15:34.493138 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 00:15:34.502196 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 00:15:34.504305 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:15:34.505165 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:15:34.507663 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 00:15:34.513082 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 00:15:34.521015 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 13 00:15:34.523135 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:15:34.523427 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 00:15:34.526426 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:15:34.527040 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 00:15:34.531965 extend-filesystems[1424]: Found loop3
Sep 13 00:15:34.533512 extend-filesystems[1424]: Found loop4
Sep 13 00:15:34.533512 extend-filesystems[1424]: Found loop5
Sep 13 00:15:34.533512 extend-filesystems[1424]: Found sr0
Sep 13 00:15:34.533512 extend-filesystems[1424]: Found vda
Sep 13 00:15:34.533512 extend-filesystems[1424]: Found vda1
Sep 13 00:15:34.533512 extend-filesystems[1424]: Found vda2
Sep 13 00:15:34.533512 extend-filesystems[1424]: Found vda3
Sep 13 00:15:34.533512 extend-filesystems[1424]: Found usr
Sep 13 00:15:34.533512 extend-filesystems[1424]: Found vda4
Sep 13 00:15:34.533512 extend-filesystems[1424]: Found vda6
Sep 13 00:15:34.533512 extend-filesystems[1424]: Found vda7
Sep 13 00:15:34.533512 extend-filesystems[1424]: Found vda9
Sep 13 00:15:34.533512 extend-filesystems[1424]: Checking size of /dev/vda9
Sep 13 00:15:34.532837 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:15:34.551236 jq[1437]: true
Sep 13 00:15:34.535011 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 00:15:34.553016 dbus-daemon[1422]: [system] SELinux support is enabled
Sep 13 00:15:34.554299 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 00:15:34.569884 jq[1444]: true
Sep 13 00:15:34.571455 extend-filesystems[1424]: Resized partition /dev/vda9
Sep 13 00:15:34.580981 update_engine[1434]: I20250913 00:15:34.579633 1434 main.cc:92] Flatcar Update Engine starting
Sep 13 00:15:34.587921 update_engine[1434]: I20250913 00:15:34.585987 1434 update_check_scheduler.cc:74] Next update check in 5m47s
Sep 13 00:15:34.588426 extend-filesystems[1459]: resize2fs 1.47.1 (20-May-2024)
Sep 13 00:15:34.606352 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 00:15:34.606432 tar[1439]: linux-amd64/LICENSE
Sep 13 00:15:34.606432 tar[1439]: linux-amd64/helm
Sep 13 00:15:34.589170 (ntainerd)[1450]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 00:15:34.595320 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:15:34.595358 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 00:15:34.599062 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:15:34.599092 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 00:15:34.602702 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 00:15:34.604233 systemd-logind[1432]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 13 00:15:34.604257 systemd-logind[1432]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:15:34.604909 systemd-logind[1432]: New seat seat0.
Sep 13 00:15:34.607598 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 00:15:34.629247 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 00:15:34.636402 sshd_keygen[1447]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:15:34.749451 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 00:15:34.749642 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1387)
Sep 13 00:15:34.771436 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 13 00:15:34.797873 extend-filesystems[1459]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 00:15:34.797873 extend-filesystems[1459]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 00:15:34.797873 extend-filesystems[1459]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 00:15:34.803617 extend-filesystems[1424]: Resized filesystem in /dev/vda9
Sep 13 00:15:34.817439 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 13 00:15:34.822982 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:15:34.824007 bash[1483]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:15:34.823469 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 00:15:34.828448 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 00:15:34.853255 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 13 00:15:34.856915 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:15:34.858210 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:15:34.869341 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:15:34.875504 locksmithd[1462]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:15:34.926176 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:15:34.948420 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:15:34.959401 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:15:34.961560 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:15:35.284754 containerd[1450]: time="2025-09-13T00:15:35.284576391Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:15:35.465002 systemd-networkd[1384]: eth0: Gained IPv6LL Sep 13 00:15:35.477574 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:15:35.479790 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:15:35.481829 containerd[1450]: time="2025-09-13T00:15:35.481747705Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:15:35.484967 containerd[1450]: time="2025-09-13T00:15:35.484885935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:15:35.484967 containerd[1450]: time="2025-09-13T00:15:35.484954933Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:15:35.485122 containerd[1450]: time="2025-09-13T00:15:35.484987698Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:15:35.485330 containerd[1450]: time="2025-09-13T00:15:35.485291442Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:15:35.485330 containerd[1450]: time="2025-09-13T00:15:35.485325026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:15:35.485483 containerd[1450]: time="2025-09-13T00:15:35.485444382Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:15:35.485483 containerd[1450]: time="2025-09-13T00:15:35.485473991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:15:35.485847 containerd[1450]: time="2025-09-13T00:15:35.485791089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:15:35.485847 containerd[1450]: time="2025-09-13T00:15:35.485819606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:15:35.485937 containerd[1450]: time="2025-09-13T00:15:35.485858467Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:15:35.485937 containerd[1450]: time="2025-09-13T00:15:35.485874762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:15:35.486048 containerd[1450]: time="2025-09-13T00:15:35.486019468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:15:35.486431 containerd[1450]: time="2025-09-13T00:15:35.486388772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:15:35.486653 containerd[1450]: time="2025-09-13T00:15:35.486589962Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:15:35.486653 containerd[1450]: time="2025-09-13T00:15:35.486648950Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:15:35.486869 containerd[1450]: time="2025-09-13T00:15:35.486818133Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:15:35.487110 containerd[1450]: time="2025-09-13T00:15:35.487070583Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:15:35.497022 containerd[1450]: time="2025-09-13T00:15:35.496936150Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:15:35.497225 containerd[1450]: time="2025-09-13T00:15:35.497078374Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:15:35.497225 containerd[1450]: time="2025-09-13T00:15:35.497106881Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:15:35.497225 containerd[1450]: time="2025-09-13T00:15:35.497187697Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:15:35.497311 containerd[1450]: time="2025-09-13T00:15:35.497253041Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:15:35.497717 containerd[1450]: time="2025-09-13T00:15:35.497685238Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:15:35.498452 containerd[1450]: time="2025-09-13T00:15:35.498320296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:15:35.498431 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 13 00:15:35.498644 containerd[1450]: time="2025-09-13T00:15:35.498622814Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:15:35.498686 containerd[1450]: time="2025-09-13T00:15:35.498648797Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:15:35.498686 containerd[1450]: time="2025-09-13T00:15:35.498674833Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Sep 13 00:15:35.498738 containerd[1450]: time="2025-09-13T00:15:35.498696641Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:15:35.498738 containerd[1450]: time="2025-09-13T00:15:35.498718357Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:15:35.498790 containerd[1450]: time="2025-09-13T00:15:35.498742606Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:15:35.498849 containerd[1450]: time="2025-09-13T00:15:35.498790844Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:15:35.498897 containerd[1450]: time="2025-09-13T00:15:35.498822914Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:15:35.498931 containerd[1450]: time="2025-09-13T00:15:35.498906658Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:15:35.498960 containerd[1450]: time="2025-09-13T00:15:35.498929786Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:15:35.498960 containerd[1450]: time="2025-09-13T00:15:35.498954222Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:15:35.499049 containerd[1450]: time="2025-09-13T00:15:35.499013781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:15:35.499049 containerd[1450]: time="2025-09-13T00:15:35.499042402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:15:35.499139 containerd[1450]: time="2025-09-13T00:15:35.499058499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:15:35.499139 containerd[1450]: time="2025-09-13T00:15:35.499074648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:15:35.499139 containerd[1450]: time="2025-09-13T00:15:35.499096633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:15:35.499139 containerd[1450]: time="2025-09-13T00:15:35.499127893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:15:35.499253 containerd[1450]: time="2025-09-13T00:15:35.499147053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:15:35.499253 containerd[1450]: time="2025-09-13T00:15:35.499167086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:15:35.499253 containerd[1450]: time="2025-09-13T00:15:35.499184907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:15:35.499253 containerd[1450]: time="2025-09-13T00:15:35.499205355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:15:35.499253 containerd[1450]: time="2025-09-13T00:15:35.499224349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Sep 13 00:15:35.499253 containerd[1450]: time="2025-09-13T00:15:35.499243624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:15:35.499417 containerd[1450]: time="2025-09-13T00:15:35.499265879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:15:35.499417 containerd[1450]: time="2025-09-13T00:15:35.499290761Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:15:35.499417 containerd[1450]: time="2025-09-13T00:15:35.499329010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:15:35.499417 containerd[1450]: time="2025-09-13T00:15:35.499344547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:15:35.499417 containerd[1450]: time="2025-09-13T00:15:35.499379575Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:15:35.499769 containerd[1450]: time="2025-09-13T00:15:35.499486957Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:15:35.499769 containerd[1450]: time="2025-09-13T00:15:35.499527303Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:15:35.499769 containerd[1450]: time="2025-09-13T00:15:35.499544926Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:15:35.500027 containerd[1450]: time="2025-09-13T00:15:35.499565011Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:15:35.500027 containerd[1450]: time="2025-09-13T00:15:35.500019795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:15:35.500090 containerd[1450]: time="2025-09-13T00:15:35.500048874Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:15:35.500090 containerd[1450]: time="2025-09-13T00:15:35.500063164Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:15:35.500090 containerd[1450]: time="2025-09-13T00:15:35.500080777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:15:35.500685 containerd[1450]: time="2025-09-13T00:15:35.500581826Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:15:35.500685 containerd[1450]: time="2025-09-13T00:15:35.500673932Z" level=info msg="Connect containerd service" Sep 13 00:15:35.501045 containerd[1450]: time="2025-09-13T00:15:35.500738558Z" level=info msg="using legacy CRI server" Sep 13 00:15:35.501045 containerd[1450]: time="2025-09-13T00:15:35.500753097Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:15:35.503392 containerd[1450]: time="2025-09-13T00:15:35.503333283Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:15:35.504314 containerd[1450]: time="2025-09-13T00:15:35.504256591Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:15:35.504472 
containerd[1450]: time="2025-09-13T00:15:35.504420893Z" level=info msg="Start subscribing containerd event" Sep 13 00:15:35.504504 containerd[1450]: time="2025-09-13T00:15:35.504493630Z" level=info msg="Start recovering state" Sep 13 00:15:35.504605 containerd[1450]: time="2025-09-13T00:15:35.504581187Z" level=info msg="Start event monitor" Sep 13 00:15:35.504635 containerd[1450]: time="2025-09-13T00:15:35.504613339Z" level=info msg="Start snapshots syncer" Sep 13 00:15:35.504655 containerd[1450]: time="2025-09-13T00:15:35.504638284Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:15:35.504686 containerd[1450]: time="2025-09-13T00:15:35.504655492Z" level=info msg="Start streaming server" Sep 13 00:15:35.505555 containerd[1450]: time="2025-09-13T00:15:35.505532242Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:15:35.505621 containerd[1450]: time="2025-09-13T00:15:35.505605541Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:15:35.505778 containerd[1450]: time="2025-09-13T00:15:35.505746830Z" level=info msg="containerd successfully booted in 0.222994s" Sep 13 00:15:35.514716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:15:35.518927 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:15:35.520789 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:15:35.575646 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 00:15:35.576159 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 00:15:35.580453 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:15:35.584365 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:15:35.701027 tar[1439]: linux-amd64/README.md Sep 13 00:15:35.723687 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:15:37.357379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:15:37.359886 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:15:37.361442 systemd[1]: Startup finished in 1.107s (kernel) + 8.746s (initrd) + 7.901s (userspace) = 17.755s. Sep 13 00:15:37.385548 (kubelet)[1535]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:15:38.246301 kubelet[1535]: E0913 00:15:38.246195 1535 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:15:38.251077 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:15:38.251324 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:15:38.251682 systemd[1]: kubelet.service: Consumed 2.321s CPU time. Sep 13 00:15:44.589913 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:15:44.592110 systemd[1]: Started sshd@0-10.0.0.139:22-10.0.0.1:34754.service - OpenSSH per-connection server daemon (10.0.0.1:34754). 
Sep 13 00:15:44.673817 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 34754 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:15:44.676255 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:44.686207 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:15:44.701096 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:15:44.703293 systemd-logind[1432]: New session 1 of user core. Sep 13 00:15:44.718305 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:15:44.729114 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:15:44.732260 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:15:44.860119 systemd[1552]: Queued start job for default target default.target. Sep 13 00:15:44.870720 systemd[1552]: Created slice app.slice - User Application Slice. Sep 13 00:15:44.870753 systemd[1552]: Reached target paths.target - Paths. Sep 13 00:15:44.870769 systemd[1552]: Reached target timers.target - Timers. Sep 13 00:15:44.872767 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:15:44.886807 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:15:44.886983 systemd[1552]: Reached target sockets.target - Sockets. Sep 13 00:15:44.886998 systemd[1552]: Reached target basic.target - Basic System. Sep 13 00:15:44.887061 systemd[1552]: Reached target default.target - Main User Target. Sep 13 00:15:44.887118 systemd[1552]: Startup finished in 147ms. Sep 13 00:15:44.887962 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:15:44.899147 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:15:44.978271 systemd[1]: Started sshd@1-10.0.0.139:22-10.0.0.1:34770.service - OpenSSH per-connection server daemon (10.0.0.1:34770). Sep 13 00:15:45.009248 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 34770 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:15:45.011705 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:45.017272 systemd-logind[1432]: New session 2 of user core. Sep 13 00:15:45.026180 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:15:45.087093 sshd[1563]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:45.096897 systemd[1]: sshd@1-10.0.0.139:22-10.0.0.1:34770.service: Deactivated successfully. Sep 13 00:15:45.099528 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:15:45.102347 systemd-logind[1432]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:15:45.115677 systemd[1]: Started sshd@2-10.0.0.139:22-10.0.0.1:34782.service - OpenSSH per-connection server daemon (10.0.0.1:34782). Sep 13 00:15:45.117629 systemd-logind[1432]: Removed session 2. Sep 13 00:15:45.150037 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 34782 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:15:45.152535 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:45.158750 systemd-logind[1432]: New session 3 of user core. Sep 13 00:15:45.169406 systemd[1]: Started session-3.scope - Session 3 of User core. 
Sep 13 00:15:45.226479 sshd[1570]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:45.245279 systemd[1]: sshd@2-10.0.0.139:22-10.0.0.1:34782.service: Deactivated successfully. Sep 13 00:15:45.247994 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:15:45.250589 systemd-logind[1432]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:15:45.258345 systemd[1]: Started sshd@3-10.0.0.139:22-10.0.0.1:34798.service - OpenSSH per-connection server daemon (10.0.0.1:34798). Sep 13 00:15:45.259579 systemd-logind[1432]: Removed session 3. Sep 13 00:15:45.288848 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 34798 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:15:45.291483 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:45.296873 systemd-logind[1432]: New session 4 of user core. Sep 13 00:15:45.313207 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:15:45.372778 sshd[1577]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:45.390013 systemd[1]: sshd@3-10.0.0.139:22-10.0.0.1:34798.service: Deactivated successfully. Sep 13 00:15:45.392479 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:15:45.394888 systemd-logind[1432]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:15:45.406481 systemd[1]: Started sshd@4-10.0.0.139:22-10.0.0.1:34814.service - OpenSSH per-connection server daemon (10.0.0.1:34814). Sep 13 00:15:45.408028 systemd-logind[1432]: Removed session 4. Sep 13 00:15:45.438403 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 34814 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:15:45.440569 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:45.447015 systemd-logind[1432]: New session 5 of user core. Sep 13 00:15:45.457027 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:15:45.523307 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:15:45.523795 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:15:45.549187 sudo[1587]: pam_unix(sudo:session): session closed for user root Sep 13 00:15:45.552098 sshd[1584]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:45.566098 systemd[1]: sshd@4-10.0.0.139:22-10.0.0.1:34814.service: Deactivated successfully. Sep 13 00:15:45.568852 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:15:45.571056 systemd-logind[1432]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:15:45.582541 systemd[1]: Started sshd@5-10.0.0.139:22-10.0.0.1:34820.service - OpenSSH per-connection server daemon (10.0.0.1:34820). Sep 13 00:15:45.584613 systemd-logind[1432]: Removed session 5. Sep 13 00:15:45.618102 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 34820 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:15:45.621000 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:45.626903 systemd-logind[1432]: New session 6 of user core. Sep 13 00:15:45.637193 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 13 00:15:45.698181 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:15:45.698634 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:15:45.703869 sudo[1596]: pam_unix(sudo:session): session closed for user root Sep 13 00:15:45.711583 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:15:45.712076 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:15:45.736462 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:15:45.739341 auditctl[1599]: No rules Sep 13 00:15:45.741435 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:15:45.741943 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:15:45.745118 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:15:45.790413 augenrules[1617]: No rules Sep 13 00:15:45.792824 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:15:45.794516 sudo[1595]: pam_unix(sudo:session): session closed for user root Sep 13 00:15:45.797158 sshd[1592]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:45.813234 systemd[1]: sshd@5-10.0.0.139:22-10.0.0.1:34820.service: Deactivated successfully. Sep 13 00:15:45.817565 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:15:45.821248 systemd-logind[1432]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:15:45.831496 systemd[1]: Started sshd@6-10.0.0.139:22-10.0.0.1:34830.service - OpenSSH per-connection server daemon (10.0.0.1:34830). Sep 13 00:15:45.833307 systemd-logind[1432]: Removed session 6. Sep 13 00:15:45.868832 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 34830 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:15:45.871993 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:45.883053 systemd-logind[1432]: New session 7 of user core. Sep 13 00:15:45.900343 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:15:45.962191 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:15:45.962745 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:15:46.708789 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:15:46.709390 (dockerd)[1647]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:15:47.598865 dockerd[1647]: time="2025-09-13T00:15:47.598185115Z" level=info msg="Starting up" Sep 13 00:15:48.396906 systemd[1]: var-lib-docker-metacopy\x2dcheck679743900-merged.mount: Deactivated successfully. Sep 13 00:15:48.398090 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:15:48.412113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:15:48.715999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:15:48.723686 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:15:48.816628 kubelet[1677]: E0913 00:15:48.816517 1677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:15:48.823626 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:15:48.823970 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:15:49.704985 dockerd[1647]: time="2025-09-13T00:15:49.704870785Z" level=info msg="Loading containers: start." Sep 13 00:15:49.895206 kernel: Initializing XFRM netlink socket Sep 13 00:15:50.039397 systemd-networkd[1384]: docker0: Link UP Sep 13 00:15:50.066577 dockerd[1647]: time="2025-09-13T00:15:50.066493578Z" level=info msg="Loading containers: done." Sep 13 00:15:50.098638 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1493012196-merged.mount: Deactivated successfully. Sep 13 00:15:50.103339 dockerd[1647]: time="2025-09-13T00:15:50.103277127Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:15:50.103443 dockerd[1647]: time="2025-09-13T00:15:50.103422523Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:15:50.103644 dockerd[1647]: time="2025-09-13T00:15:50.103611443Z" level=info msg="Daemon has completed initialization" Sep 13 00:15:50.161433 dockerd[1647]: time="2025-09-13T00:15:50.161307119Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:15:50.161891 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:15:51.309448 containerd[1450]: time="2025-09-13T00:15:51.309379447Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 13 00:15:53.697019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3792156940.mount: Deactivated successfully. 
Sep 13 00:15:57.014520 containerd[1450]: time="2025-09-13T00:15:57.014425703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:57.024691 containerd[1450]: time="2025-09-13T00:15:57.024598242Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 13 00:15:57.036994 containerd[1450]: time="2025-09-13T00:15:57.036909821Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:57.052272 containerd[1450]: time="2025-09-13T00:15:57.052178143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:57.053632 containerd[1450]: time="2025-09-13T00:15:57.053554199Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 5.744122784s" Sep 13 00:15:57.053632 containerd[1450]: time="2025-09-13T00:15:57.053616215Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 13 00:15:57.055040 containerd[1450]: time="2025-09-13T00:15:57.054989330Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 13 00:15:58.967662 containerd[1450]: time="2025-09-13T00:15:58.967565294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:58.968692 containerd[1450]: time="2025-09-13T00:15:58.968621518Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Sep 13 00:15:58.970188 containerd[1450]: time="2025-09-13T00:15:58.970143644Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:58.973706 containerd[1450]: time="2025-09-13T00:15:58.973666541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:58.974880 containerd[1450]: time="2025-09-13T00:15:58.974776167Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.919730865s" Sep 13 00:15:58.974880 containerd[1450]: time="2025-09-13T00:15:58.974858350Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 13 00:15:58.975697 
containerd[1450]: time="2025-09-13T00:15:58.975665406Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 13 00:15:59.074287 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:15:59.087110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:15:59.324235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:15:59.330474 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:15:59.387053 kubelet[1880]: E0913 00:15:59.386951 1880 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:15:59.392227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:15:59.392534 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:16:02.751161 containerd[1450]: time="2025-09-13T00:16:02.751059992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:02.753099 containerd[1450]: time="2025-09-13T00:16:02.753008667Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 13 00:16:02.755990 containerd[1450]: time="2025-09-13T00:16:02.755897367Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:02.760112 containerd[1450]: time="2025-09-13T00:16:02.760047362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:02.761600 containerd[1450]: time="2025-09-13T00:16:02.761538676Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 3.785755458s" Sep 13 00:16:02.761600 containerd[1450]: time="2025-09-13T00:16:02.761589060Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 13 00:16:02.762526 containerd[1450]: time="2025-09-13T00:16:02.762382113Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 13 00:16:04.396681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3410623315.mount: Deactivated successfully. 
Sep 13 00:16:05.316068 containerd[1450]: time="2025-09-13T00:16:05.315965542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:05.317054 containerd[1450]: time="2025-09-13T00:16:05.316985492Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 13 00:16:05.318421 containerd[1450]: time="2025-09-13T00:16:05.318372625Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:05.321621 containerd[1450]: time="2025-09-13T00:16:05.321531977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:05.322220 containerd[1450]: time="2025-09-13T00:16:05.322174770Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.559753537s" Sep 13 00:16:05.322220 containerd[1450]: time="2025-09-13T00:16:05.322214992Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 13 00:16:05.322895 containerd[1450]: time="2025-09-13T00:16:05.322837152Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:16:06.028926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3467924975.mount: Deactivated successfully. 
Sep 13 00:16:08.186836 containerd[1450]: time="2025-09-13T00:16:08.186659097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:08.190551 containerd[1450]: time="2025-09-13T00:16:08.190132107Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 13 00:16:08.193483 containerd[1450]: time="2025-09-13T00:16:08.193332674Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:08.198886 containerd[1450]: time="2025-09-13T00:16:08.198733291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:08.200534 containerd[1450]: time="2025-09-13T00:16:08.200471731Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.877601152s" Sep 13 00:16:08.200534 containerd[1450]: time="2025-09-13T00:16:08.200521906Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:16:08.201203 containerd[1450]: time="2025-09-13T00:16:08.201174679Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:16:09.643084 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 00:16:09.662317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:09.873349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:09.881972 (kubelet)[1960]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:16:10.249880 kubelet[1960]: E0913 00:16:10.249788 1960 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:16:10.256132 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:16:10.256392 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:16:10.399276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2090541455.mount: Deactivated successfully. 
Sep 13 00:16:10.411329 containerd[1450]: time="2025-09-13T00:16:10.411228081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:10.413418 containerd[1450]: time="2025-09-13T00:16:10.413345164Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 00:16:10.415346 containerd[1450]: time="2025-09-13T00:16:10.415273396Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:10.418127 containerd[1450]: time="2025-09-13T00:16:10.418045704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:10.419777 containerd[1450]: time="2025-09-13T00:16:10.419466828Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.218250669s" Sep 13 00:16:10.419777 containerd[1450]: time="2025-09-13T00:16:10.419536458Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:16:10.420578 containerd[1450]: time="2025-09-13T00:16:10.420282639Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 13 00:16:11.109495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2767036899.mount: Deactivated successfully. Sep 13 00:16:18.105521 containerd[1450]: time="2025-09-13T00:16:18.105356503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:18.107829 containerd[1450]: time="2025-09-13T00:16:18.107743774Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 13 00:16:18.110950 containerd[1450]: time="2025-09-13T00:16:18.110833493Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:18.118180 containerd[1450]: time="2025-09-13T00:16:18.118103670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:18.119471 containerd[1450]: time="2025-09-13T00:16:18.119404662Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 7.699067706s" Sep 13 00:16:18.119471 containerd[1450]: time="2025-09-13T00:16:18.119458117Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 13 00:16:19.617287 update_engine[1434]: I20250913 00:16:19.617136 1434 update_attempter.cc:509] Updating boot flags... 
Sep 13 00:16:19.662870 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2055) Sep 13 00:16:19.740842 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2054) Sep 13 00:16:19.804026 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2054) Sep 13 00:16:20.366721 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 13 00:16:20.378260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:20.617732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:20.625309 (kubelet)[2071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:16:20.766670 kubelet[2071]: E0913 00:16:20.766499 2071 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:16:20.771926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:16:20.772221 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:16:21.504260 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:21.513115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:21.543550 systemd[1]: Reloading requested from client PID 2087 ('systemctl') (unit session-7.scope)... Sep 13 00:16:21.543572 systemd[1]: Reloading... Sep 13 00:16:21.632870 zram_generator::config[2126]: No configuration found. Sep 13 00:16:22.311174 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:16:22.396375 systemd[1]: Reloading finished in 852 ms. Sep 13 00:16:22.454816 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:22.461165 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:16:22.461513 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:22.479401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:22.702619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:22.712210 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:16:22.772428 kubelet[2177]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:16:22.772428 kubelet[2177]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:16:22.772428 kubelet[2177]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:16:22.772428 kubelet[2177]: I0913 00:16:22.772094 2177 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:16:24.550384 kubelet[2177]: I0913 00:16:24.550319 2177 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 00:16:24.550384 kubelet[2177]: I0913 00:16:24.550362 2177 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:16:24.551168 kubelet[2177]: I0913 00:16:24.550756 2177 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 00:16:24.650312 kubelet[2177]: E0913 00:16:24.650250 2177 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:24.652894 kubelet[2177]: I0913 00:16:24.652855 2177 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:16:24.691752 kubelet[2177]: E0913 00:16:24.691677 2177 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:16:24.691752 kubelet[2177]: I0913 00:16:24.691742 2177 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:16:24.697685 kubelet[2177]: I0913 00:16:24.697643 2177 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:16:24.754921 kubelet[2177]: I0913 00:16:24.754761 2177 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:16:24.755212 kubelet[2177]: I0913 00:16:24.754905 2177 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:16:24.755328 kubelet[2177]: I0913 00:16:24.755234 2177 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:16:24.755328 kubelet[2177]: I0913 00:16:24.755249 2177 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 00:16:24.755569 kubelet[2177]: I0913 00:16:24.755535 2177 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:16:24.783783 kubelet[2177]: I0913 00:16:24.783711 2177 kubelet.go:446] "Attempting to sync node with API server" Sep 13 00:16:24.783783 kubelet[2177]: I0913 00:16:24.783775 2177 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:16:24.784010 kubelet[2177]: I0913 00:16:24.783830 2177 kubelet.go:352] "Adding apiserver pod source" Sep 13 00:16:24.784010 kubelet[2177]: I0913 00:16:24.783868 2177 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:16:24.786614 kubelet[2177]: W0913 00:16:24.786438 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 13 00:16:24.786614 kubelet[2177]: E0913 00:16:24.786524 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:24.786768 kubelet[2177]: W0913 00:16:24.786615 2177 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 13 00:16:24.786768 kubelet[2177]: E0913 00:16:24.786664 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:24.788102 kubelet[2177]: I0913 00:16:24.788080 2177 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:16:24.788527 kubelet[2177]: I0913 00:16:24.788511 2177 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:16:24.789202 kubelet[2177]: W0913 00:16:24.789180 2177 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:16:24.803689 kubelet[2177]: I0913 00:16:24.803485 2177 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:16:24.803689 kubelet[2177]: I0913 00:16:24.803574 2177 server.go:1287] "Started kubelet" Sep 13 00:16:24.805793 kubelet[2177]: I0913 00:16:24.805685 2177 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:16:24.805793 kubelet[2177]: I0913 00:16:24.805731 2177 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:16:24.806209 kubelet[2177]: I0913 00:16:24.806174 2177 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:16:24.806302 kubelet[2177]: I0913 00:16:24.806254 2177 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:16:24.807424 kubelet[2177]: I0913 00:16:24.807376 2177 server.go:479] "Adding debug handlers to kubelet server" Sep 13 00:16:24.811101 kubelet[2177]: I0913 00:16:24.811051 2177 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:16:24.830433 kubelet[2177]: E0913 00:16:24.817845 2177 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864af60c8e54945 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:16:24.803526981 +0000 UTC m=+2.086638348,LastTimestamp:2025-09-13 00:16:24.803526981 +0000 UTC m=+2.086638348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:16:24.830433 kubelet[2177]: E0913 00:16:24.820195 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:24.830433 kubelet[2177]: I0913 00:16:24.820238 2177 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:16:24.830433 
kubelet[2177]: I0913 00:16:24.820452 2177 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:16:24.830433 kubelet[2177]: I0913 00:16:24.820555 2177 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:16:24.830433 kubelet[2177]: W0913 00:16:24.821835 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 13 00:16:24.830433 kubelet[2177]: E0913 00:16:24.821914 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:24.832438 kubelet[2177]: E0913 00:16:24.832376 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="200ms" Sep 13 00:16:24.835045 kubelet[2177]: E0913 00:16:24.834985 2177 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:16:24.836880 kubelet[2177]: I0913 00:16:24.836164 2177 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:16:24.836880 kubelet[2177]: I0913 00:16:24.836188 2177 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:16:24.836880 kubelet[2177]: I0913 00:16:24.836305 2177 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:16:24.852527 kubelet[2177]: I0913 00:16:24.852127 2177 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:16:24.853977 kubelet[2177]: I0913 00:16:24.853951 2177 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:16:24.854319 kubelet[2177]: I0913 00:16:24.854004 2177 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 00:16:24.854319 kubelet[2177]: I0913 00:16:24.854046 2177 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
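
The reflector warnings above all fail the same way: every LIST against https://10.0.0.139:6443 ends in "connect: connection refused", presumably because the kubelet itself has yet to start the kube-apiserver static pod it is configured to watch. A minimal Go probe reproduces the same check outside the kubelet; the address is taken from the log, the timeout is an arbitrary choice:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the endpoint the reflectors keep failing against.
        // 10.0.0.139:6443 comes from the log lines above.
        conn, err := net.DialTimeout("tcp", "10.0.0.139:6443", 2*time.Second)
        if err != nil {
            // Prints e.g. "... connect: connection refused" while the
            // apiserver static pod is not yet running.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }
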
Sep 13 00:16:24.854319 kubelet[2177]: I0913 00:16:24.854061 2177 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 00:16:24.854319 kubelet[2177]: E0913 00:16:24.854130 2177 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:16:24.855188 kubelet[2177]: W0913 00:16:24.855153 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 13 00:16:24.855252 kubelet[2177]: E0913 00:16:24.855201 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:24.860076 kubelet[2177]: I0913 00:16:24.860046 2177 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:16:24.860205 kubelet[2177]: I0913 00:16:24.860188 2177 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:16:24.860324 kubelet[2177]: I0913 00:16:24.860286 2177 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:16:24.920666 kubelet[2177]: E0913 00:16:24.920581 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:24.955131 kubelet[2177]: E0913 00:16:24.955067 2177 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:16:25.019644 kubelet[2177]: I0913 00:16:25.019576 2177 policy_none.go:49] "None policy: Start" Sep 13 00:16:25.019644 kubelet[2177]: I0913 00:16:25.019630 2177 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:16:25.019644 kubelet[2177]: I0913 00:16:25.019649 2177 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:16:25.021350 kubelet[2177]: E0913 00:16:25.021295 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:25.033022 kubelet[2177]: E0913 00:16:25.032970 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="400ms" Sep 13 00:16:25.122405 kubelet[2177]: E0913 00:16:25.122323 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:25.129175 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:16:25.146028 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:16:25.150042 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
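
The three slices just created mirror the kubelet QoS hierarchy under the systemd cgroup driver declared in the nodeConfig: kubepods.slice holds guaranteed pods directly, with kubepods-burstable.slice and kubepods-besteffort.slice nested beneath it for the other two classes. A sketch of how the per-pod slice name seen later in this log is derived (simplified: the real kubelet also escapes dashes inside name segments, which these hex UIDs happen not to contain):

    package main

    import "fmt"

    // podSlice builds the systemd slice name for a pod following the
    // kubepods/<qos>/pod<uid> hierarchy visible in this log.
    func podSlice(qos, uid string) string {
        if qos == "guaranteed" { // guaranteed pods sit directly under kubepods.slice
            return fmt.Sprintf("kubepods-pod%s.slice", uid)
        }
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, uid)
    }

    func main() {
        fmt.Println(podSlice("burstable", "e3c0fbdb00e5d8b305956984850bce99"))
        // kubepods-burstable-pode3c0fbdb00e5d8b305956984850bce99.slice,
        // matching the slice systemd creates below.
    }
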
Sep 13 00:16:25.155347 kubelet[2177]: E0913 00:16:25.155277 2177 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:16:25.162776 kubelet[2177]: I0913 00:16:25.162711 2177 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:16:25.163180 kubelet[2177]: I0913 00:16:25.163149 2177 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:16:25.163257 kubelet[2177]: I0913 00:16:25.163184 2177 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:16:25.163828 kubelet[2177]: I0913 00:16:25.163606 2177 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:16:25.165274 kubelet[2177]: E0913 00:16:25.165186 2177 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:16:25.165274 kubelet[2177]: E0913 00:16:25.165270 2177 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:16:25.266362 kubelet[2177]: I0913 00:16:25.266304 2177 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:16:25.267006 kubelet[2177]: E0913 00:16:25.266956 2177 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 13 00:16:25.434455 kubelet[2177]: E0913 00:16:25.434241 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="800ms" Sep 13 00:16:25.469870 kubelet[2177]: I0913 00:16:25.469773 2177 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:16:25.470755 kubelet[2177]: E0913 00:16:25.470676 2177 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 13 00:16:25.570570 systemd[1]: Created slice kubepods-burstable-pode3c0fbdb00e5d8b305956984850bce99.slice - libcontainer container kubepods-burstable-pode3c0fbdb00e5d8b305956984850bce99.slice. 
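
The eviction manager that just started its control loop enforces the HardEvictionThresholds carried in the nodeConfig logged at kubelet startup. A small decoder for that JSON shape makes the thresholds readable; the struct names are ours, the field names and values are copied from the log:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // threshold models one entry of the HardEvictionThresholds array
    // printed by container_manager_linux.go; only logged fields appear.
    type threshold struct {
        Signal   string
        Operator string
        Value    struct {
            Quantity   *string
            Percentage float64
        }
    }

    func main() {
        raw := `[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
                 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
                 {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
                 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
                 {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}]`
        var ts []threshold
        if err := json.Unmarshal([]byte(raw), &ts); err != nil {
            panic(err)
        }
        for _, t := range ts {
            if t.Value.Quantity != nil {
                fmt.Printf("evict when %s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
            } else {
                fmt.Printf("evict when %s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
            }
        }
    }
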
Sep 13 00:16:25.594812 kubelet[2177]: W0913 00:16:25.594676 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 13 00:16:25.594812 kubelet[2177]: E0913 00:16:25.594818 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:25.601992 kubelet[2177]: E0913 00:16:25.601927 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:25.607671 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. Sep 13 00:16:25.620011 kubelet[2177]: E0913 00:16:25.619962 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:25.622407 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. Sep 13 00:16:25.624502 kubelet[2177]: E0913 00:16:25.624467 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:25.629996 kubelet[2177]: I0913 00:16:25.629925 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:25.629996 kubelet[2177]: I0913 00:16:25.629995 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:25.630145 kubelet[2177]: I0913 00:16:25.630029 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:25.630145 kubelet[2177]: I0913 00:16:25.630055 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:25.630145 kubelet[2177]: I0913 00:16:25.630100 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3c0fbdb00e5d8b305956984850bce99-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3c0fbdb00e5d8b305956984850bce99\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:25.630145 kubelet[2177]: I0913 00:16:25.630124 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3c0fbdb00e5d8b305956984850bce99-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3c0fbdb00e5d8b305956984850bce99\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:25.630328 kubelet[2177]: I0913 00:16:25.630258 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3c0fbdb00e5d8b305956984850bce99-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e3c0fbdb00e5d8b305956984850bce99\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:25.630372 kubelet[2177]: I0913 00:16:25.630359 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:25.630417 kubelet[2177]: I0913 00:16:25.630389 2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:25.839752 kubelet[2177]: W0913 00:16:25.839529 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 13 00:16:25.839752 kubelet[2177]: E0913 00:16:25.839628 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:25.872932 kubelet[2177]: I0913 00:16:25.872886 2177 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:16:25.873326 kubelet[2177]: E0913 00:16:25.873278 2177 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 13 00:16:25.902763 kubelet[2177]: E0913 00:16:25.902683 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:25.903754 containerd[1450]: time="2025-09-13T00:16:25.903702973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e3c0fbdb00e5d8b305956984850bce99,Namespace:kube-system,Attempt:0,}" Sep 13 00:16:25.921254 kubelet[2177]: E0913 00:16:25.921177 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:25.922010 containerd[1450]: time="2025-09-13T00:16:25.921959355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 13 00:16:25.925508 kubelet[2177]: E0913 00:16:25.925464 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:25.926264 containerd[1450]: time="2025-09-13T00:16:25.926221762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 13 00:16:26.035796 kubelet[2177]: W0913 00:16:26.035754 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 13 00:16:26.035942 kubelet[2177]: E0913 00:16:26.035832 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:26.235419 kubelet[2177]: E0913 00:16:26.235341 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="1.6s" Sep 13 00:16:26.284215 kubelet[2177]: W0913 00:16:26.284094 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 13 00:16:26.284370 kubelet[2177]: E0913 00:16:26.284220 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:26.675834 kubelet[2177]: I0913 00:16:26.675775 2177 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:16:26.676447 kubelet[2177]: E0913 00:16:26.676271 2177 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 13 00:16:26.773410 kubelet[2177]: E0913 00:16:26.773334 2177 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:27.357922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3269102779.mount: Deactivated successfully. 
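
The "Failed to ensure lease exists, will retry" interval doubles on each consecutive failure: 200ms, 400ms, 800ms, now 1.6s, with 3.2s following below. A sketch of that cadence; the 7s ceiling is our assumption for illustration, since the log never shows a cap being reached:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Doubling retry cadence matching the lease-controller log lines.
        const maxInterval = 7 * time.Second // assumed cap, not shown in the log
        interval := 200 * time.Millisecond
        for i := 0; i < 7; i++ {
            fmt.Printf("retry in %v\n", interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }
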
Sep 13 00:16:27.365688 containerd[1450]: time="2025-09-13T00:16:27.365594476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:16:27.367686 containerd[1450]: time="2025-09-13T00:16:27.367622589Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:16:27.369050 containerd[1450]: time="2025-09-13T00:16:27.368996415Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:16:27.370167 containerd[1450]: time="2025-09-13T00:16:27.370087364Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:16:27.371479 containerd[1450]: time="2025-09-13T00:16:27.371417365Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:16:27.372069 containerd[1450]: time="2025-09-13T00:16:27.372025812Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:16:27.373090 containerd[1450]: time="2025-09-13T00:16:27.373055337Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 13 00:16:27.376051 containerd[1450]: time="2025-09-13T00:16:27.375987731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:16:27.378718 containerd[1450]: time="2025-09-13T00:16:27.378675093Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.474849629s" Sep 13 00:16:27.379394 containerd[1450]: time="2025-09-13T00:16:27.379353622Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.453039605s" Sep 13 00:16:27.380247 containerd[1450]: time="2025-09-13T00:16:27.380207975Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.458148901s" Sep 13 00:16:27.581829 containerd[1450]: time="2025-09-13T00:16:27.579423896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:16:27.581829 containerd[1450]: time="2025-09-13T00:16:27.579519214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:16:27.581829 containerd[1450]: time="2025-09-13T00:16:27.579533956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:27.581829 containerd[1450]: time="2025-09-13T00:16:27.579698665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:27.582584 containerd[1450]: time="2025-09-13T00:16:27.581791319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:16:27.582584 containerd[1450]: time="2025-09-13T00:16:27.582550204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:16:27.582584 containerd[1450]: time="2025-09-13T00:16:27.582568373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:27.582700 containerd[1450]: time="2025-09-13T00:16:27.582657187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:27.592551 containerd[1450]: time="2025-09-13T00:16:27.592398360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:16:27.592701 containerd[1450]: time="2025-09-13T00:16:27.592550713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:16:27.592734 containerd[1450]: time="2025-09-13T00:16:27.592681118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:27.593507 containerd[1450]: time="2025-09-13T00:16:27.592895374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:27.730067 systemd[1]: Started cri-containerd-af962b544ff90896b8482c223f072148bca5d654d5500c676aaf08eb7a1be03f.scope - libcontainer container af962b544ff90896b8482c223f072148bca5d654d5500c676aaf08eb7a1be03f. Sep 13 00:16:27.738907 kubelet[2177]: W0913 00:16:27.737663 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 13 00:16:27.738907 kubelet[2177]: E0913 00:16:27.737724 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:27.754094 systemd[1]: Started cri-containerd-1b32d471bd7ebd9239e5af7b81514e3ea84a1c0504524585c73375b8b324a818.scope - libcontainer container 1b32d471bd7ebd9239e5af7b81514e3ea84a1c0504524585c73375b8b324a818. 
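
The three "Pulled image registry.k8s.io/pause:3.8" lines above report the same image id and digest with slightly different durations, consistent with the three RunPodSandbox requests waiting on one shared pull. A trivial sketch comparing the logged durations (values copied verbatim from the log):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Wait times reported by the three sandbox creations.
        waits := []string{"1.474849629s", "1.453039605s", "1.458148901s"}
        var slowest time.Duration
        for _, w := range waits {
            d, err := time.ParseDuration(w)
            if err != nil {
                panic(err)
            }
            if d > slowest {
                slowest = d
            }
        }
        fmt.Println("slowest sandbox waited", slowest)
    }
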
Sep 13 00:16:27.756455 systemd[1]: Started cri-containerd-5bf26910619a2d4a329fe534c0323b24e03c1f8a8c73f18b770561b2bcefc6f2.scope - libcontainer container 5bf26910619a2d4a329fe534c0323b24e03c1f8a8c73f18b770561b2bcefc6f2. Sep 13 00:16:27.836639 kubelet[2177]: E0913 00:16:27.836568 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="3.2s" Sep 13 00:16:27.841227 kubelet[2177]: W0913 00:16:27.841182 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 13 00:16:27.841368 kubelet[2177]: E0913 00:16:27.841278 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:27.849925 containerd[1450]: time="2025-09-13T00:16:27.849869385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e3c0fbdb00e5d8b305956984850bce99,Namespace:kube-system,Attempt:0,} returns sandbox id \"af962b544ff90896b8482c223f072148bca5d654d5500c676aaf08eb7a1be03f\"" Sep 13 00:16:27.855593 kubelet[2177]: E0913 00:16:27.855555 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:27.858143 containerd[1450]: time="2025-09-13T00:16:27.858110818Z" level=info msg="CreateContainer within sandbox \"af962b544ff90896b8482c223f072148bca5d654d5500c676aaf08eb7a1be03f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:16:27.866816 containerd[1450]: time="2025-09-13T00:16:27.866769100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b32d471bd7ebd9239e5af7b81514e3ea84a1c0504524585c73375b8b324a818\"" Sep 13 00:16:27.867504 kubelet[2177]: E0913 00:16:27.867480 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:27.869221 containerd[1450]: time="2025-09-13T00:16:27.869195030Z" level=info msg="CreateContainer within sandbox \"1b32d471bd7ebd9239e5af7b81514e3ea84a1c0504524585c73375b8b324a818\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:16:27.870559 containerd[1450]: time="2025-09-13T00:16:27.869670126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bf26910619a2d4a329fe534c0323b24e03c1f8a8c73f18b770561b2bcefc6f2\"" Sep 13 00:16:27.871137 kubelet[2177]: E0913 00:16:27.871108 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:27.873510 containerd[1450]: time="2025-09-13T00:16:27.873475154Z" 
level=info msg="CreateContainer within sandbox \"5bf26910619a2d4a329fe534c0323b24e03c1f8a8c73f18b770561b2bcefc6f2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:16:27.907542 kubelet[2177]: W0913 00:16:27.907484 2177 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 13 00:16:27.907542 kubelet[2177]: E0913 00:16:27.907545 2177 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:28.172245 containerd[1450]: time="2025-09-13T00:16:28.172164172Z" level=info msg="CreateContainer within sandbox \"af962b544ff90896b8482c223f072148bca5d654d5500c676aaf08eb7a1be03f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fe2b20fd04bfe73f9e5dfc20166179589c13c49c5ec1da2ec9d18a184f364c91\"" Sep 13 00:16:28.173308 containerd[1450]: time="2025-09-13T00:16:28.173272865Z" level=info msg="StartContainer for \"fe2b20fd04bfe73f9e5dfc20166179589c13c49c5ec1da2ec9d18a184f364c91\"" Sep 13 00:16:28.180499 containerd[1450]: time="2025-09-13T00:16:28.180306231Z" level=info msg="CreateContainer within sandbox \"1b32d471bd7ebd9239e5af7b81514e3ea84a1c0504524585c73375b8b324a818\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"15d1ac27f270f14d39abee8528fad465c0a292936e39d24632a7f02d80c8973d\"" Sep 13 00:16:28.181311 containerd[1450]: time="2025-09-13T00:16:28.181247210Z" level=info msg="StartContainer for \"15d1ac27f270f14d39abee8528fad465c0a292936e39d24632a7f02d80c8973d\"" Sep 13 00:16:28.189274 containerd[1450]: time="2025-09-13T00:16:28.189188575Z" level=info msg="CreateContainer within sandbox \"5bf26910619a2d4a329fe534c0323b24e03c1f8a8c73f18b770561b2bcefc6f2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b334d64711a0966a4fc820b3511380d2f8d4b1e8d793ae161cf8748a7dc507a9\"" Sep 13 00:16:28.191835 containerd[1450]: time="2025-09-13T00:16:28.190183863Z" level=info msg="StartContainer for \"b334d64711a0966a4fc820b3511380d2f8d4b1e8d793ae161cf8748a7dc507a9\"" Sep 13 00:16:28.217065 systemd[1]: Started cri-containerd-fe2b20fd04bfe73f9e5dfc20166179589c13c49c5ec1da2ec9d18a184f364c91.scope - libcontainer container fe2b20fd04bfe73f9e5dfc20166179589c13c49c5ec1da2ec9d18a184f364c91. Sep 13 00:16:28.236207 systemd[1]: Started cri-containerd-15d1ac27f270f14d39abee8528fad465c0a292936e39d24632a7f02d80c8973d.scope - libcontainer container 15d1ac27f270f14d39abee8528fad465c0a292936e39d24632a7f02d80c8973d. Sep 13 00:16:28.242391 systemd[1]: Started cri-containerd-b334d64711a0966a4fc820b3511380d2f8d4b1e8d793ae161cf8748a7dc507a9.scope - libcontainer container b334d64711a0966a4fc820b3511380d2f8d4b1e8d793ae161cf8748a7dc507a9. 
Sep 13 00:16:28.280847 kubelet[2177]: I0913 00:16:28.278428 2177 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:16:28.280847 kubelet[2177]: E0913 00:16:28.279288 2177 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 13 00:16:28.323143 containerd[1450]: time="2025-09-13T00:16:28.323059666Z" level=info msg="StartContainer for \"fe2b20fd04bfe73f9e5dfc20166179589c13c49c5ec1da2ec9d18a184f364c91\" returns successfully" Sep 13 00:16:28.323450 containerd[1450]: time="2025-09-13T00:16:28.323412741Z" level=info msg="StartContainer for \"b334d64711a0966a4fc820b3511380d2f8d4b1e8d793ae161cf8748a7dc507a9\" returns successfully" Sep 13 00:16:28.323538 containerd[1450]: time="2025-09-13T00:16:28.323468180Z" level=info msg="StartContainer for \"15d1ac27f270f14d39abee8528fad465c0a292936e39d24632a7f02d80c8973d\" returns successfully" Sep 13 00:16:28.925340 kubelet[2177]: E0913 00:16:28.925278 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:28.926052 kubelet[2177]: E0913 00:16:28.925534 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:28.931338 kubelet[2177]: E0913 00:16:28.931294 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:28.931474 kubelet[2177]: E0913 00:16:28.931456 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:28.934738 kubelet[2177]: E0913 00:16:28.934700 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:28.936825 kubelet[2177]: E0913 00:16:28.934906 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:29.937420 kubelet[2177]: E0913 00:16:29.936502 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:29.937420 kubelet[2177]: E0913 00:16:29.936679 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:29.937420 kubelet[2177]: E0913 00:16:29.937210 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:29.937420 kubelet[2177]: E0913 00:16:29.937327 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:30.413457 kubelet[2177]: E0913 00:16:30.413348 2177 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 13 00:16:30.775475 kubelet[2177]: E0913 
00:16:30.775313 2177 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 13 00:16:30.939059 kubelet[2177]: E0913 00:16:30.939002 2177 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:30.939566 kubelet[2177]: E0913 00:16:30.939204 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:31.041447 kubelet[2177]: E0913 00:16:31.041286 2177 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 00:16:31.209828 kubelet[2177]: E0913 00:16:31.209753 2177 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 13 00:16:31.481076 kubelet[2177]: I0913 00:16:31.481007 2177 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:16:31.496340 kubelet[2177]: I0913 00:16:31.496210 2177 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 00:16:31.496340 kubelet[2177]: E0913 00:16:31.496278 2177 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 13 00:16:31.510059 kubelet[2177]: E0913 00:16:31.510002 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:31.610992 kubelet[2177]: E0913 00:16:31.610916 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:31.712059 kubelet[2177]: E0913 00:16:31.711955 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:31.813115 kubelet[2177]: E0913 00:16:31.812913 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:31.913718 kubelet[2177]: E0913 00:16:31.913624 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:32.014497 kubelet[2177]: E0913 00:16:32.014433 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:32.115645 kubelet[2177]: E0913 00:16:32.115336 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:32.215613 kubelet[2177]: E0913 00:16:32.215538 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:32.316409 kubelet[2177]: E0913 00:16:32.316339 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:32.417566 kubelet[2177]: E0913 00:16:32.417460 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:32.518632 kubelet[2177]: E0913 00:16:32.518527 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:32.530941 kubelet[2177]: E0913 00:16:32.530889 2177 kubelet.go:3190] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:32.531132 kubelet[2177]: E0913 00:16:32.531109 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:32.619215 kubelet[2177]: E0913 00:16:32.619154 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:32.719597 kubelet[2177]: E0913 00:16:32.719425 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:32.820304 kubelet[2177]: E0913 00:16:32.820214 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:32.920957 kubelet[2177]: E0913 00:16:32.920871 2177 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:32.965872 kubelet[2177]: I0913 00:16:32.965790 2177 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:32.993006 kubelet[2177]: E0913 00:16:32.992846 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:33.033147 kubelet[2177]: I0913 00:16:33.033083 2177 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:33.038845 kubelet[2177]: I0913 00:16:33.038596 2177 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:33.045032 kubelet[2177]: I0913 00:16:33.044826 2177 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:33.051212 kubelet[2177]: E0913 00:16:33.051146 2177 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:33.691108 systemd[1]: Reloading requested from client PID 2457 ('systemctl') (unit session-7.scope)... Sep 13 00:16:33.691132 systemd[1]: Reloading... Sep 13 00:16:33.807405 zram_generator::config[2496]: No configuration found. 
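
The "Creating a mirror pod for static pod" and "already exists" lines above concern pods read from /etc/kubernetes/manifests, the static pod path registered at kubelet startup. The mirror pod is the API-server reflection of each manifest file, so "already exists" after a kubelet restart is expected: the old mirror object is still present in the API. A sketch that lists what the kubelet would pick up from that directory (path from the log, the extension filter is our simplification):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        const dir = "/etc/kubernetes/manifests" // static pod path from the log
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        for _, e := range entries {
            // Each manifest becomes a static pod plus its API mirror pod.
            if ext := filepath.Ext(e.Name()); ext == ".yaml" || ext == ".yml" || ext == ".json" {
                fmt.Println(filepath.Join(dir, e.Name()))
            }
        }
    }
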
Sep 13 00:16:33.807652 kubelet[2177]: I0913 00:16:33.791389 2177 apiserver.go:52] "Watching apiserver" Sep 13 00:16:33.811690 kubelet[2177]: E0913 00:16:33.811603 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:33.811960 kubelet[2177]: E0913 00:16:33.811939 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:33.821271 kubelet[2177]: I0913 00:16:33.821215 2177 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:16:33.943969 kubelet[2177]: E0913 00:16:33.943757 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:33.978206 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:16:34.090007 systemd[1]: Reloading finished in 398 ms. Sep 13 00:16:34.138840 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:34.158147 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:16:34.158537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:34.158612 systemd[1]: kubelet.service: Consumed 1.491s CPU time, 136.5M memory peak, 0B memory swap peak. Sep 13 00:16:34.174223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:34.402745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:34.408700 (kubelet)[2541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:16:34.459377 kubelet[2541]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:16:34.459377 kubelet[2541]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:16:34.459377 kubelet[2541]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:16:34.459960 kubelet[2541]: I0913 00:16:34.459501 2541 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:16:34.467316 kubelet[2541]: I0913 00:16:34.467251 2541 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 00:16:34.467316 kubelet[2541]: I0913 00:16:34.467283 2541 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:16:34.467569 kubelet[2541]: I0913 00:16:34.467542 2541 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 00:16:34.468814 kubelet[2541]: I0913 00:16:34.468775 2541 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
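
"Client rotation is on, will bootstrap in background" pairs with the certificate_store line just above: the kubelet keeps its client certificate and key together in kubelet-client-current.pem and swaps rotated pairs in behind that path. crypto/tls accepts the same combined file for both halves, so a sketch for inspecting it needs nothing else (path from the log, everything else illustrative):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "os"
    )

    func main() {
        // Cert and key live in one PEM file; pass it as both arguments.
        const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
        pair, err := tls.LoadX509KeyPair(pem, pem)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        leaf, err := x509.ParseCertificate(pair.Certificate[0])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Printf("client cert %q valid until %v\n", leaf.Subject.CommonName, leaf.NotAfter)
    }
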
Sep 13 00:16:34.471397 kubelet[2541]: I0913 00:16:34.471350 2541 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:16:34.475491 kubelet[2541]: E0913 00:16:34.475447 2541 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:16:34.475491 kubelet[2541]: I0913 00:16:34.475481 2541 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:16:34.480922 kubelet[2541]: I0913 00:16:34.480882 2541 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:16:34.481189 kubelet[2541]: I0913 00:16:34.481153 2541 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:16:34.481374 kubelet[2541]: I0913 00:16:34.481179 2541 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:16:34.481374 kubelet[2541]: I0913 00:16:34.481373 2541 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:16:34.481517 kubelet[2541]: I0913 00:16:34.481382 2541 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 00:16:34.481517 kubelet[2541]: I0913 00:16:34.481439 2541 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:16:34.481631 kubelet[2541]: I0913 00:16:34.481610 2541 kubelet.go:446] "Attempting to sync node with API server" Sep 13 00:16:34.481672 kubelet[2541]: I0913 00:16:34.481641 2541 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:16:34.481672 kubelet[2541]: I0913 00:16:34.481662 2541 kubelet.go:352] "Adding apiserver pod source" Sep 13 00:16:34.481672 kubelet[2541]: I0913 00:16:34.481672 2541 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Sep 13 00:16:34.483132 kubelet[2541]: I0913 00:16:34.483099 2541 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:16:34.483626 kubelet[2541]: I0913 00:16:34.483592 2541 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:16:34.484277 kubelet[2541]: I0913 00:16:34.484156 2541 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:16:34.484277 kubelet[2541]: I0913 00:16:34.484194 2541 server.go:1287] "Started kubelet" Sep 13 00:16:34.484698 kubelet[2541]: I0913 00:16:34.484660 2541 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:16:34.485662 kubelet[2541]: I0913 00:16:34.485644 2541 server.go:479] "Adding debug handlers to kubelet server" Sep 13 00:16:34.489120 kubelet[2541]: I0913 00:16:34.489049 2541 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:16:34.489349 kubelet[2541]: I0913 00:16:34.489315 2541 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:16:34.494674 kubelet[2541]: I0913 00:16:34.494636 2541 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:16:34.495415 kubelet[2541]: I0913 00:16:34.495215 2541 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:16:34.495778 kubelet[2541]: I0913 00:16:34.495742 2541 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:16:34.495986 kubelet[2541]: I0913 00:16:34.495901 2541 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:16:34.496184 kubelet[2541]: I0913 00:16:34.496110 2541 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:16:34.498085 kubelet[2541]: E0913 00:16:34.498001 2541 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:16:34.499662 kubelet[2541]: I0913 00:16:34.499617 2541 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:16:34.499794 kubelet[2541]: I0913 00:16:34.499748 2541 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:16:34.502687 kubelet[2541]: I0913 00:16:34.502650 2541 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:16:34.516292 kubelet[2541]: I0913 00:16:34.516127 2541 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:16:34.518275 kubelet[2541]: I0913 00:16:34.518241 2541 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:16:34.518369 kubelet[2541]: I0913 00:16:34.518296 2541 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 00:16:34.518369 kubelet[2541]: I0913 00:16:34.518327 2541 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 00:16:34.518369 kubelet[2541]: I0913 00:16:34.518337 2541 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 00:16:34.518481 kubelet[2541]: E0913 00:16:34.518411 2541 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:16:34.545174 kubelet[2541]: I0913 00:16:34.545144 2541 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:16:34.545174 kubelet[2541]: I0913 00:16:34.545163 2541 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:16:34.545174 kubelet[2541]: I0913 00:16:34.545182 2541 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:16:34.545466 kubelet[2541]: I0913 00:16:34.545333 2541 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:16:34.545466 kubelet[2541]: I0913 00:16:34.545344 2541 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:16:34.545466 kubelet[2541]: I0913 00:16:34.545364 2541 policy_none.go:49] "None policy: Start" Sep 13 00:16:34.545466 kubelet[2541]: I0913 00:16:34.545373 2541 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:16:34.545466 kubelet[2541]: I0913 00:16:34.545384 2541 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:16:34.545620 kubelet[2541]: I0913 00:16:34.545515 2541 state_mem.go:75] "Updated machine memory state" Sep 13 00:16:34.550077 kubelet[2541]: I0913 00:16:34.550047 2541 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:16:34.550244 kubelet[2541]: I0913 00:16:34.550226 2541 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:16:34.550270 kubelet[2541]: I0913 00:16:34.550241 2541 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:16:34.550541 kubelet[2541]: I0913 00:16:34.550507 2541 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:16:34.552104 kubelet[2541]: E0913 00:16:34.552085 2541 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 00:16:34.620154 kubelet[2541]: I0913 00:16:34.620050 2541 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:34.620303 kubelet[2541]: I0913 00:16:34.620184 2541 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:34.620303 kubelet[2541]: I0913 00:16:34.620276 2541 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:34.657665 kubelet[2541]: I0913 00:16:34.657517 2541 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:16:34.692029 kubelet[2541]: E0913 00:16:34.691912 2541 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:34.692232 kubelet[2541]: E0913 00:16:34.692053 2541 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:34.693762 kubelet[2541]: E0913 00:16:34.693702 2541 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:34.697560 kubelet[2541]: I0913 00:16:34.696644 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3c0fbdb00e5d8b305956984850bce99-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3c0fbdb00e5d8b305956984850bce99\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:34.697560 kubelet[2541]: I0913 00:16:34.696717 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3c0fbdb00e5d8b305956984850bce99-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e3c0fbdb00e5d8b305956984850bce99\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:34.697560 kubelet[2541]: I0913 00:16:34.696754 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:34.697560 kubelet[2541]: I0913 00:16:34.696780 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:34.697560 kubelet[2541]: I0913 00:16:34.696827 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:34.697956 kubelet[2541]: I0913 00:16:34.696850 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/e3c0fbdb00e5d8b305956984850bce99-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3c0fbdb00e5d8b305956984850bce99\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:34.697956 kubelet[2541]: I0913 00:16:34.696874 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:34.697956 kubelet[2541]: I0913 00:16:34.696927 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:34.697956 kubelet[2541]: I0913 00:16:34.696952 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:34.698104 kubelet[2541]: I0913 00:16:34.698056 2541 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 13 00:16:34.699651 kubelet[2541]: I0913 00:16:34.698192 2541 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 00:16:34.993210 kubelet[2541]: E0913 00:16:34.992979 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:34.993210 kubelet[2541]: E0913 00:16:34.993018 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:34.994193 kubelet[2541]: E0913 00:16:34.994161 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:35.482835 kubelet[2541]: I0913 00:16:35.482751 2541 apiserver.go:52] "Watching apiserver" Sep 13 00:16:35.497010 kubelet[2541]: I0913 00:16:35.496940 2541 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:16:35.530407 kubelet[2541]: I0913 00:16:35.530032 2541 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:35.530407 kubelet[2541]: I0913 00:16:35.530072 2541 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:35.530407 kubelet[2541]: I0913 00:16:35.530179 2541 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:35.818388 kubelet[2541]: E0913 00:16:35.817610 2541 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:35.818388 kubelet[2541]: E0913 00:16:35.817913 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:35.818758 kubelet[2541]: I0913 00:16:35.818694 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.818648736 podStartE2EDuration="2.818648736s" podCreationTimestamp="2025-09-13 00:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:16:35.81863108 +0000 UTC m=+1.404233316" watchObservedRunningTime="2025-09-13 00:16:35.818648736 +0000 UTC m=+1.404250972" Sep 13 00:16:35.818880 kubelet[2541]: E0913 00:16:35.818776 2541 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:35.818947 kubelet[2541]: E0913 00:16:35.818925 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:35.819010 kubelet[2541]: E0913 00:16:35.818985 2541 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:35.819752 kubelet[2541]: E0913 00:16:35.819124 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:35.847023 kubelet[2541]: I0913 00:16:35.846958 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.846936631 podStartE2EDuration="3.846936631s" podCreationTimestamp="2025-09-13 00:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:16:35.831574601 +0000 UTC m=+1.417176837" watchObservedRunningTime="2025-09-13 00:16:35.846936631 +0000 UTC m=+1.432538868" Sep 13 00:16:35.859243 kubelet[2541]: I0913 00:16:35.859156 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.859130452 podStartE2EDuration="2.859130452s" podCreationTimestamp="2025-09-13 00:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:16:35.847399953 +0000 UTC m=+1.433002199" watchObservedRunningTime="2025-09-13 00:16:35.859130452 +0000 UTC m=+1.444732688" Sep 13 00:16:36.532049 kubelet[2541]: E0913 00:16:36.531645 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:36.532049 kubelet[2541]: E0913 00:16:36.531645 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:36.534162 kubelet[2541]: E0913 00:16:36.532389 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:39.445199 kubelet[2541]: I0913 00:16:39.445148 2541 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:16:39.445777 
containerd[1450]: time="2025-09-13T00:16:39.445650674Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:16:39.446077 kubelet[2541]: I0913 00:16:39.445972 2541 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:16:39.824771 systemd[1]: Created slice kubepods-besteffort-pod5e0bc2a1_7c3d_4540_b813_4464a7401770.slice - libcontainer container kubepods-besteffort-pod5e0bc2a1_7c3d_4540_b813_4464a7401770.slice. Sep 13 00:16:39.826465 kubelet[2541]: I0913 00:16:39.826409 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5e0bc2a1-7c3d-4540-b813-4464a7401770-kube-proxy\") pod \"kube-proxy-wfh2t\" (UID: \"5e0bc2a1-7c3d-4540-b813-4464a7401770\") " pod="kube-system/kube-proxy-wfh2t" Sep 13 00:16:39.826465 kubelet[2541]: I0913 00:16:39.826451 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e0bc2a1-7c3d-4540-b813-4464a7401770-xtables-lock\") pod \"kube-proxy-wfh2t\" (UID: \"5e0bc2a1-7c3d-4540-b813-4464a7401770\") " pod="kube-system/kube-proxy-wfh2t" Sep 13 00:16:39.826465 kubelet[2541]: I0913 00:16:39.826472 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e0bc2a1-7c3d-4540-b813-4464a7401770-lib-modules\") pod \"kube-proxy-wfh2t\" (UID: \"5e0bc2a1-7c3d-4540-b813-4464a7401770\") " pod="kube-system/kube-proxy-wfh2t" Sep 13 00:16:39.826727 kubelet[2541]: I0913 00:16:39.826492 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh6l2\" (UniqueName: \"kubernetes.io/projected/5e0bc2a1-7c3d-4540-b813-4464a7401770-kube-api-access-qh6l2\") pod \"kube-proxy-wfh2t\" (UID: \"5e0bc2a1-7c3d-4540-b813-4464a7401770\") " pod="kube-system/kube-proxy-wfh2t" Sep 13 00:16:39.933742 kubelet[2541]: E0913 00:16:39.933676 2541 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 13 00:16:39.933742 kubelet[2541]: E0913 00:16:39.933739 2541 projected.go:194] Error preparing data for projected volume kube-api-access-qh6l2 for pod kube-system/kube-proxy-wfh2t: configmap "kube-root-ca.crt" not found Sep 13 00:16:39.933991 kubelet[2541]: E0913 00:16:39.933861 2541 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e0bc2a1-7c3d-4540-b813-4464a7401770-kube-api-access-qh6l2 podName:5e0bc2a1-7c3d-4540-b813-4464a7401770 nodeName:}" failed. No retries permitted until 2025-09-13 00:16:40.433827198 +0000 UTC m=+6.019429505 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qh6l2" (UniqueName: "kubernetes.io/projected/5e0bc2a1-7c3d-4540-b813-4464a7401770-kube-api-access-qh6l2") pod "kube-proxy-wfh2t" (UID: "5e0bc2a1-7c3d-4540-b813-4464a7401770") : configmap "kube-root-ca.crt" not found Sep 13 00:16:40.739393 kubelet[2541]: E0913 00:16:40.739321 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:40.740315 containerd[1450]: time="2025-09-13T00:16:40.740186752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wfh2t,Uid:5e0bc2a1-7c3d-4540-b813-4464a7401770,Namespace:kube-system,Attempt:0,}" Sep 13 00:16:41.212871 systemd[1]: Created slice kubepods-besteffort-pod70a2b352_63fb_4949_ad16_41b22b536773.slice - libcontainer container kubepods-besteffort-pod70a2b352_63fb_4949_ad16_41b22b536773.slice. Sep 13 00:16:41.221554 containerd[1450]: time="2025-09-13T00:16:41.221368406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:16:41.221554 containerd[1450]: time="2025-09-13T00:16:41.221499886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:16:41.221554 containerd[1450]: time="2025-09-13T00:16:41.221537012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:41.221994 containerd[1450]: time="2025-09-13T00:16:41.221749790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:41.236148 kubelet[2541]: I0913 00:16:41.236089 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/70a2b352-63fb-4949-ad16-41b22b536773-var-lib-calico\") pod \"tigera-operator-755d956888-28mgt\" (UID: \"70a2b352-63fb-4949-ad16-41b22b536773\") " pod="tigera-operator/tigera-operator-755d956888-28mgt" Sep 13 00:16:41.236452 kubelet[2541]: I0913 00:16:41.236364 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn4pc\" (UniqueName: \"kubernetes.io/projected/70a2b352-63fb-4949-ad16-41b22b536773-kube-api-access-zn4pc\") pod \"tigera-operator-755d956888-28mgt\" (UID: \"70a2b352-63fb-4949-ad16-41b22b536773\") " pod="tigera-operator/tigera-operator-755d956888-28mgt" Sep 13 00:16:41.257032 systemd[1]: Started cri-containerd-bc18a20f7fc665ff4480d46c6b28bfaa60209016efe16bdd63dea935434679e0.scope - libcontainer container bc18a20f7fc665ff4480d46c6b28bfaa60209016efe16bdd63dea935434679e0. 
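
The dns.go:153 "Nameserver limits exceeded" entries that recur throughout this boot come from the kubelet capping a pod's resolv.conf at three nameservers (the classic glibc resolver limit): the host's resolver config evidently lists more than the three that survive (1.1.1.1 1.0.0.1 8.8.8.8). A minimal sketch of that truncation, assuming a resolv.conf-style input; this is illustrative, not the kubelet's actual implementation:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the glibc resolver limit that the kubelet
// enforces when composing a pod's resolv.conf.
const maxNameservers = 3

// applyNameserverLimit returns the nameservers that would be applied
// and whether any had to be omitted.
func applyNameserverLimit(resolvConf string) ([]string, bool) {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		return servers[:maxNameservers], true // some nameservers omitted
	}
	return servers, false
}

func main() {
	// Hypothetical host resolv.conf that would reproduce the log line:
	// the first three entries are exactly the "applied nameserver line".
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	applied, truncated := applyNameserverLimit(conf)
	if truncated {
		fmt.Printf("Nameserver limits exceeded, applied: %s\n", strings.Join(applied, " "))
	}
}
```

Trimming the host's /etc/resolv.conf to three nameserver entries would make the warning stop; the pods are otherwise unaffected, since the first three servers are still applied.
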
Sep 13 00:16:41.286606 containerd[1450]: time="2025-09-13T00:16:41.286550744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wfh2t,Uid:5e0bc2a1-7c3d-4540-b813-4464a7401770,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc18a20f7fc665ff4480d46c6b28bfaa60209016efe16bdd63dea935434679e0\"" Sep 13 00:16:41.287852 kubelet[2541]: E0913 00:16:41.287789 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:41.290633 containerd[1450]: time="2025-09-13T00:16:41.290581711Z" level=info msg="CreateContainer within sandbox \"bc18a20f7fc665ff4480d46c6b28bfaa60209016efe16bdd63dea935434679e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:16:41.319310 containerd[1450]: time="2025-09-13T00:16:41.319216933Z" level=info msg="CreateContainer within sandbox \"bc18a20f7fc665ff4480d46c6b28bfaa60209016efe16bdd63dea935434679e0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"44217f1010c346bfeaaf8b89fcc9fec984a2059e3bb0372e3c832ec4459beb8f\"" Sep 13 00:16:41.320223 containerd[1450]: time="2025-09-13T00:16:41.320179632Z" level=info msg="StartContainer for \"44217f1010c346bfeaaf8b89fcc9fec984a2059e3bb0372e3c832ec4459beb8f\"" Sep 13 00:16:41.362007 systemd[1]: Started cri-containerd-44217f1010c346bfeaaf8b89fcc9fec984a2059e3bb0372e3c832ec4459beb8f.scope - libcontainer container 44217f1010c346bfeaaf8b89fcc9fec984a2059e3bb0372e3c832ec4459beb8f. Sep 13 00:16:41.398988 containerd[1450]: time="2025-09-13T00:16:41.398923461Z" level=info msg="StartContainer for \"44217f1010c346bfeaaf8b89fcc9fec984a2059e3bb0372e3c832ec4459beb8f\" returns successfully" Sep 13 00:16:41.518642 containerd[1450]: time="2025-09-13T00:16:41.518485699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-28mgt,Uid:70a2b352-63fb-4949-ad16-41b22b536773,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:16:41.542254 kubelet[2541]: E0913 00:16:41.542215 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:41.551053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3326965842.mount: Deactivated successfully. Sep 13 00:16:41.566679 containerd[1450]: time="2025-09-13T00:16:41.566538471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:16:41.566679 containerd[1450]: time="2025-09-13T00:16:41.566621091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:16:41.566679 containerd[1450]: time="2025-09-13T00:16:41.566636142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:41.566999 containerd[1450]: time="2025-09-13T00:16:41.566745055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:41.592849 systemd[1]: Started cri-containerd-7bafc69d59eedef87236174dc3fc22bd87bb5efc98f0c5e39b405b904e437ab3.scope - libcontainer container 7bafc69d59eedef87236174dc3fc22bd87bb5efc98f0c5e39b405b904e437ab3. 
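
The kubelet/containerd exchange above for kube-proxy-wfh2t is the standard CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer targets that sandbox, and StartContainer runs the result ("returns successfully"). A hedged sketch of the same three calls against the CRI v1 gRPC API — the socket path is containerd's default, the image name is a placeholder (the log does not record it), and error handling is trimmed:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's default CRI socket; adjust for other runtimes.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox -- metadata copied from the log entry above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-wfh2t",
			Uid:       "5e0bc2a1-7c3d-4540-b813-4464a7401770",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within the returned sandbox id. The image is a
	// placeholder, not taken from the log.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:placeholder"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer -- mirrors the "StartContainer ... returns
	// successfully" entry above.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```

The tigera-operator sandbox being started in the entries that follow goes through exactly the same sequence, with the image pull inserted between steps 1 and 2.
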
Sep 13 00:16:41.659430 containerd[1450]: time="2025-09-13T00:16:41.659304629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-28mgt,Uid:70a2b352-63fb-4949-ad16-41b22b536773,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7bafc69d59eedef87236174dc3fc22bd87bb5efc98f0c5e39b405b904e437ab3\"" Sep 13 00:16:41.661297 containerd[1450]: time="2025-09-13T00:16:41.661259668Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:16:43.296485 kubelet[2541]: E0913 00:16:43.296371 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:43.319521 kubelet[2541]: I0913 00:16:43.319420 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wfh2t" podStartSLOduration=4.319392085 podStartE2EDuration="4.319392085s" podCreationTimestamp="2025-09-13 00:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:16:41.773711688 +0000 UTC m=+7.359313934" watchObservedRunningTime="2025-09-13 00:16:43.319392085 +0000 UTC m=+8.904994321" Sep 13 00:16:43.547774 kubelet[2541]: E0913 00:16:43.547585 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:43.700839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2503115705.mount: Deactivated successfully. Sep 13 00:16:43.816757 kubelet[2541]: E0913 00:16:43.816580 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:44.550189 kubelet[2541]: E0913 00:16:44.550127 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:44.868520 containerd[1450]: time="2025-09-13T00:16:44.868432221Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:44.907383 containerd[1450]: time="2025-09-13T00:16:44.907293555Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 00:16:44.927455 containerd[1450]: time="2025-09-13T00:16:44.927362132Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:44.956244 containerd[1450]: time="2025-09-13T00:16:44.956151490Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:44.957274 containerd[1450]: time="2025-09-13T00:16:44.957219820Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.295925481s" Sep 13 00:16:44.957324 containerd[1450]: time="2025-09-13T00:16:44.957277818Z" 
level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:16:44.959527 containerd[1450]: time="2025-09-13T00:16:44.959480864Z" level=info msg="CreateContainer within sandbox \"7bafc69d59eedef87236174dc3fc22bd87bb5efc98f0c5e39b405b904e437ab3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:16:45.026382 containerd[1450]: time="2025-09-13T00:16:45.026277265Z" level=info msg="CreateContainer within sandbox \"7bafc69d59eedef87236174dc3fc22bd87bb5efc98f0c5e39b405b904e437ab3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8b0de2b740ba7311ea92fcea8c4a45e850a8c04afe2466ba3eb3de85bac43c60\"" Sep 13 00:16:45.027132 containerd[1450]: time="2025-09-13T00:16:45.027071642Z" level=info msg="StartContainer for \"8b0de2b740ba7311ea92fcea8c4a45e850a8c04afe2466ba3eb3de85bac43c60\"" Sep 13 00:16:45.087105 systemd[1]: Started cri-containerd-8b0de2b740ba7311ea92fcea8c4a45e850a8c04afe2466ba3eb3de85bac43c60.scope - libcontainer container 8b0de2b740ba7311ea92fcea8c4a45e850a8c04afe2466ba3eb3de85bac43c60. Sep 13 00:16:45.144309 containerd[1450]: time="2025-09-13T00:16:45.144103001Z" level=info msg="StartContainer for \"8b0de2b740ba7311ea92fcea8c4a45e850a8c04afe2466ba3eb3de85bac43c60\" returns successfully" Sep 13 00:16:45.414456 kubelet[2541]: E0913 00:16:45.414278 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:45.571272 kubelet[2541]: I0913 00:16:45.571185 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-28mgt" podStartSLOduration=2.273511922 podStartE2EDuration="5.571158397s" podCreationTimestamp="2025-09-13 00:16:40 +0000 UTC" firstStartedPulling="2025-09-13 00:16:41.660482711 +0000 UTC m=+7.246084947" lastFinishedPulling="2025-09-13 00:16:44.958129186 +0000 UTC m=+10.543731422" observedRunningTime="2025-09-13 00:16:45.571058314 +0000 UTC m=+11.156660550" watchObservedRunningTime="2025-09-13 00:16:45.571158397 +0000 UTC m=+11.156760633" Sep 13 00:16:52.195355 sudo[1628]: pam_unix(sudo:session): session closed for user root Sep 13 00:16:52.204139 sshd[1625]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:52.214515 systemd-logind[1432]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:16:52.216781 systemd[1]: sshd@6-10.0.0.139:22-10.0.0.1:34830.service: Deactivated successfully. Sep 13 00:16:52.221455 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:16:52.221871 systemd[1]: session-7.scope: Consumed 6.660s CPU time, 158.1M memory peak, 0B memory swap peak. Sep 13 00:16:52.226091 systemd-logind[1432]: Removed session 7. Sep 13 00:16:55.368324 systemd[1]: Created slice kubepods-besteffort-podaff503e6_4fc4_4a67_ac85_bbc8048d5f95.slice - libcontainer container kubepods-besteffort-podaff503e6_4fc4_4a67_ac85_bbc8048d5f95.slice. 
Sep 13 00:16:55.375767 kubelet[2541]: I0913 00:16:55.373816 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/aff503e6-4fc4-4a67-ac85-bbc8048d5f95-typha-certs\") pod \"calico-typha-7c8ccd5f5f-h9k9d\" (UID: \"aff503e6-4fc4-4a67-ac85-bbc8048d5f95\") " pod="calico-system/calico-typha-7c8ccd5f5f-h9k9d" Sep 13 00:16:55.375767 kubelet[2541]: I0913 00:16:55.373892 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aff503e6-4fc4-4a67-ac85-bbc8048d5f95-tigera-ca-bundle\") pod \"calico-typha-7c8ccd5f5f-h9k9d\" (UID: \"aff503e6-4fc4-4a67-ac85-bbc8048d5f95\") " pod="calico-system/calico-typha-7c8ccd5f5f-h9k9d" Sep 13 00:16:55.375767 kubelet[2541]: I0913 00:16:55.373926 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mv72\" (UniqueName: \"kubernetes.io/projected/aff503e6-4fc4-4a67-ac85-bbc8048d5f95-kube-api-access-8mv72\") pod \"calico-typha-7c8ccd5f5f-h9k9d\" (UID: \"aff503e6-4fc4-4a67-ac85-bbc8048d5f95\") " pod="calico-system/calico-typha-7c8ccd5f5f-h9k9d" Sep 13 00:16:55.673714 kubelet[2541]: E0913 00:16:55.673555 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:55.674469 containerd[1450]: time="2025-09-13T00:16:55.674425566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c8ccd5f5f-h9k9d,Uid:aff503e6-4fc4-4a67-ac85-bbc8048d5f95,Namespace:calico-system,Attempt:0,}" Sep 13 00:16:55.836642 containerd[1450]: time="2025-09-13T00:16:55.835707823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:16:55.836642 containerd[1450]: time="2025-09-13T00:16:55.835870760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:16:55.836642 containerd[1450]: time="2025-09-13T00:16:55.835899748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:55.836642 containerd[1450]: time="2025-09-13T00:16:55.836067615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:55.872338 systemd[1]: Started cri-containerd-32af97d0c22004e4ca848a430bd91997456dbee117fbb7d5e9df6744c9ccdbc0.scope - libcontainer container 32af97d0c22004e4ca848a430bd91997456dbee117fbb7d5e9df6744c9ccdbc0. Sep 13 00:16:55.874767 systemd[1]: Created slice kubepods-besteffort-podacf4d800_08e5_4f36_a724_125b155a82e6.slice - libcontainer container kubepods-besteffort-podacf4d800_08e5_4f36_a724_125b155a82e6.slice. 
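
The wall of driver-call.go errors that follows the calico-node volume registrations below is the kubelet's FlexVolume prober exec'ing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with an init argument before that binary exists; Calico normally installs the driver later through the flexvol-driver-host host-path mount registered for calico-node-zpcg7. The exec fails ("executable file not found in $PATH"), the captured output is empty, and unmarshalling "" as JSON yields exactly the logged "unexpected end of JSON input". A sketch reproducing just that failure mode — the driverStatus type and the printed format are assumptions for illustration, not kubelet's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is a minimal stand-in for the FlexVolume status object
// the kubelet expects a driver to print as JSON on stdout.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func callDriver(executable string, args ...string) (*driverStatus, error) {
	out, err := exec.Command(executable, args...).CombinedOutput()
	if err != nil {
		// Corresponds to the W "FlexVolume: driver call failed ...
		// output: \"\"" entries in the log.
		fmt.Printf("driver call failed: executable: %s, args: %v, error: %v, output: %q\n",
			executable, args, err, string(out))
	}
	var st driverStatus
	// With empty output this returns "unexpected end of JSON input",
	// matching the E driver-call.go:262 entries.
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, err
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println("init:", err)
}
```

The errors are noisy but harmless here: once the calico-node pod starts and populates the driver directory, the prober's next pass succeeds and the messages stop.
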
Sep 13 00:16:55.875861 kubelet[2541]: I0913 00:16:55.875794 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/acf4d800-08e5-4f36-a724-125b155a82e6-flexvol-driver-host\") pod \"calico-node-zpcg7\" (UID: \"acf4d800-08e5-4f36-a724-125b155a82e6\") " pod="calico-system/calico-node-zpcg7" Sep 13 00:16:55.875955 kubelet[2541]: I0913 00:16:55.875865 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/acf4d800-08e5-4f36-a724-125b155a82e6-node-certs\") pod \"calico-node-zpcg7\" (UID: \"acf4d800-08e5-4f36-a724-125b155a82e6\") " pod="calico-system/calico-node-zpcg7" Sep 13 00:16:55.875955 kubelet[2541]: I0913 00:16:55.875889 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/acf4d800-08e5-4f36-a724-125b155a82e6-cni-log-dir\") pod \"calico-node-zpcg7\" (UID: \"acf4d800-08e5-4f36-a724-125b155a82e6\") " pod="calico-system/calico-node-zpcg7" Sep 13 00:16:55.875955 kubelet[2541]: I0913 00:16:55.875903 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acf4d800-08e5-4f36-a724-125b155a82e6-tigera-ca-bundle\") pod \"calico-node-zpcg7\" (UID: \"acf4d800-08e5-4f36-a724-125b155a82e6\") " pod="calico-system/calico-node-zpcg7" Sep 13 00:16:55.875955 kubelet[2541]: I0913 00:16:55.875926 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acf4d800-08e5-4f36-a724-125b155a82e6-lib-modules\") pod \"calico-node-zpcg7\" (UID: \"acf4d800-08e5-4f36-a724-125b155a82e6\") " pod="calico-system/calico-node-zpcg7" Sep 13 00:16:55.875955 kubelet[2541]: I0913 00:16:55.875941 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqgwn\" (UniqueName: \"kubernetes.io/projected/acf4d800-08e5-4f36-a724-125b155a82e6-kube-api-access-nqgwn\") pod \"calico-node-zpcg7\" (UID: \"acf4d800-08e5-4f36-a724-125b155a82e6\") " pod="calico-system/calico-node-zpcg7" Sep 13 00:16:55.876116 kubelet[2541]: I0913 00:16:55.875956 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/acf4d800-08e5-4f36-a724-125b155a82e6-cni-net-dir\") pod \"calico-node-zpcg7\" (UID: \"acf4d800-08e5-4f36-a724-125b155a82e6\") " pod="calico-system/calico-node-zpcg7" Sep 13 00:16:55.876116 kubelet[2541]: I0913 00:16:55.875970 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/acf4d800-08e5-4f36-a724-125b155a82e6-var-lib-calico\") pod \"calico-node-zpcg7\" (UID: \"acf4d800-08e5-4f36-a724-125b155a82e6\") " pod="calico-system/calico-node-zpcg7" Sep 13 00:16:55.876116 kubelet[2541]: I0913 00:16:55.875995 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/acf4d800-08e5-4f36-a724-125b155a82e6-policysync\") pod \"calico-node-zpcg7\" (UID: \"acf4d800-08e5-4f36-a724-125b155a82e6\") " pod="calico-system/calico-node-zpcg7" Sep 13 00:16:55.876116 kubelet[2541]: I0913 00:16:55.876010 2541 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/acf4d800-08e5-4f36-a724-125b155a82e6-cni-bin-dir\") pod \"calico-node-zpcg7\" (UID: \"acf4d800-08e5-4f36-a724-125b155a82e6\") " pod="calico-system/calico-node-zpcg7" Sep 13 00:16:55.876116 kubelet[2541]: I0913 00:16:55.876032 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acf4d800-08e5-4f36-a724-125b155a82e6-xtables-lock\") pod \"calico-node-zpcg7\" (UID: \"acf4d800-08e5-4f36-a724-125b155a82e6\") " pod="calico-system/calico-node-zpcg7" Sep 13 00:16:55.876391 kubelet[2541]: I0913 00:16:55.876049 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/acf4d800-08e5-4f36-a724-125b155a82e6-var-run-calico\") pod \"calico-node-zpcg7\" (UID: \"acf4d800-08e5-4f36-a724-125b155a82e6\") " pod="calico-system/calico-node-zpcg7" Sep 13 00:16:55.927200 containerd[1450]: time="2025-09-13T00:16:55.926998477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c8ccd5f5f-h9k9d,Uid:aff503e6-4fc4-4a67-ac85-bbc8048d5f95,Namespace:calico-system,Attempt:0,} returns sandbox id \"32af97d0c22004e4ca848a430bd91997456dbee117fbb7d5e9df6744c9ccdbc0\"" Sep 13 00:16:55.928386 kubelet[2541]: E0913 00:16:55.928352 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:55.929502 containerd[1450]: time="2025-09-13T00:16:55.929470535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:16:55.983413 kubelet[2541]: E0913 00:16:55.983357 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:55.983413 kubelet[2541]: W0913 00:16:55.983398 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:55.983617 kubelet[2541]: E0913 00:16:55.983470 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:55.990970 kubelet[2541]: E0913 00:16:55.990900 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:55.990970 kubelet[2541]: W0913 00:16:55.990954 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:55.991182 kubelet[2541]: E0913 00:16:55.990995 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:16:56.078389 kubelet[2541]: E0913 00:16:56.078322 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k54nv" podUID="5a380725-fcdb-4d95-86f1-35ff10380123" Sep 13 00:16:56.080117 kubelet[2541]: E0913 00:16:56.080087 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.080226 kubelet[2541]: W0913 00:16:56.080112 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.080226 kubelet[2541]: E0913 00:16:56.080166 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.081710 kubelet[2541]: E0913 00:16:56.081679 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.081710 kubelet[2541]: W0913 00:16:56.081697 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.081710 kubelet[2541]: E0913 00:16:56.081710 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.082368 kubelet[2541]: E0913 00:16:56.082009 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.082368 kubelet[2541]: W0913 00:16:56.082038 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.082368 kubelet[2541]: E0913 00:16:56.082052 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.082617 kubelet[2541]: E0913 00:16:56.082558 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.082617 kubelet[2541]: W0913 00:16:56.082593 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.082703 kubelet[2541]: E0913 00:16:56.082625 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:16:56.083566 kubelet[2541]: E0913 00:16:56.083535 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.083566 kubelet[2541]: W0913 00:16:56.083556 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.083566 kubelet[2541]: E0913 00:16:56.083570 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.083916 kubelet[2541]: E0913 00:16:56.083881 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.084090 kubelet[2541]: W0913 00:16:56.083902 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.084090 kubelet[2541]: E0913 00:16:56.083969 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.084614 kubelet[2541]: E0913 00:16:56.084586 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.084614 kubelet[2541]: W0913 00:16:56.084608 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.084730 kubelet[2541]: E0913 00:16:56.084626 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.086834 kubelet[2541]: E0913 00:16:56.085143 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.086834 kubelet[2541]: W0913 00:16:56.085161 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.086834 kubelet[2541]: E0913 00:16:56.085175 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.086834 kubelet[2541]: E0913 00:16:56.085475 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.086834 kubelet[2541]: W0913 00:16:56.085486 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.086834 kubelet[2541]: E0913 00:16:56.085498 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:16:56.086834 kubelet[2541]: E0913 00:16:56.086294 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.086834 kubelet[2541]: W0913 00:16:56.086304 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.086834 kubelet[2541]: E0913 00:16:56.086315 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.086834 kubelet[2541]: E0913 00:16:56.086628 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.087197 kubelet[2541]: W0913 00:16:56.086637 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.087197 kubelet[2541]: E0913 00:16:56.086647 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.089721 kubelet[2541]: E0913 00:16:56.089680 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.089721 kubelet[2541]: W0913 00:16:56.089699 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.089721 kubelet[2541]: E0913 00:16:56.089714 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.090155 kubelet[2541]: E0913 00:16:56.090114 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.090155 kubelet[2541]: W0913 00:16:56.090140 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.090384 kubelet[2541]: E0913 00:16:56.090174 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.090534 kubelet[2541]: E0913 00:16:56.090506 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.090534 kubelet[2541]: W0913 00:16:56.090518 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.090534 kubelet[2541]: E0913 00:16:56.090528 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:16:56.090971 kubelet[2541]: E0913 00:16:56.090922 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.090971 kubelet[2541]: W0913 00:16:56.090938 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.090971 kubelet[2541]: E0913 00:16:56.090957 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.091250 kubelet[2541]: E0913 00:16:56.091240 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.091291 kubelet[2541]: W0913 00:16:56.091254 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.091291 kubelet[2541]: E0913 00:16:56.091269 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.091556 kubelet[2541]: E0913 00:16:56.091537 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.091617 kubelet[2541]: W0913 00:16:56.091548 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.091617 kubelet[2541]: E0913 00:16:56.091590 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.092722 kubelet[2541]: E0913 00:16:56.092690 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.092722 kubelet[2541]: W0913 00:16:56.092712 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.092722 kubelet[2541]: E0913 00:16:56.092724 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.094266 kubelet[2541]: E0913 00:16:56.094229 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.094622 kubelet[2541]: W0913 00:16:56.094378 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.094622 kubelet[2541]: E0913 00:16:56.094444 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:16:56.095662 kubelet[2541]: E0913 00:16:56.095460 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.095662 kubelet[2541]: W0913 00:16:56.095477 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.095662 kubelet[2541]: E0913 00:16:56.095492 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.181355 kubelet[2541]: E0913 00:16:56.181168 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.181355 kubelet[2541]: W0913 00:16:56.181199 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.181355 kubelet[2541]: E0913 00:16:56.181226 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.181355 kubelet[2541]: I0913 00:16:56.181260 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5a380725-fcdb-4d95-86f1-35ff10380123-socket-dir\") pod \"csi-node-driver-k54nv\" (UID: \"5a380725-fcdb-4d95-86f1-35ff10380123\") " pod="calico-system/csi-node-driver-k54nv" Sep 13 00:16:56.181722 containerd[1450]: time="2025-09-13T00:16:56.181654308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zpcg7,Uid:acf4d800-08e5-4f36-a724-125b155a82e6,Namespace:calico-system,Attempt:0,}" Sep 13 00:16:56.182187 kubelet[2541]: E0913 00:16:56.182154 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.182187 kubelet[2541]: W0913 00:16:56.182176 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.182318 kubelet[2541]: E0913 00:16:56.182225 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:16:56.182318 kubelet[2541]: I0913 00:16:56.182244 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a380725-fcdb-4d95-86f1-35ff10380123-kubelet-dir\") pod \"csi-node-driver-k54nv\" (UID: \"5a380725-fcdb-4d95-86f1-35ff10380123\") " pod="calico-system/csi-node-driver-k54nv" Sep 13 00:16:56.182611 kubelet[2541]: E0913 00:16:56.182567 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.182611 kubelet[2541]: W0913 00:16:56.182586 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.182611 kubelet[2541]: E0913 00:16:56.182605 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.183693 kubelet[2541]: I0913 00:16:56.182621 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xczwl\" (UniqueName: \"kubernetes.io/projected/5a380725-fcdb-4d95-86f1-35ff10380123-kube-api-access-xczwl\") pod \"csi-node-driver-k54nv\" (UID: \"5a380725-fcdb-4d95-86f1-35ff10380123\") " pod="calico-system/csi-node-driver-k54nv" Sep 13 00:16:56.183693 kubelet[2541]: E0913 00:16:56.183104 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.183693 kubelet[2541]: W0913 00:16:56.183116 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.183693 kubelet[2541]: E0913 00:16:56.183139 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:56.183693 kubelet[2541]: I0913 00:16:56.183684 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5a380725-fcdb-4d95-86f1-35ff10380123-registration-dir\") pod \"csi-node-driver-k54nv\" (UID: \"5a380725-fcdb-4d95-86f1-35ff10380123\") " pod="calico-system/csi-node-driver-k54nv" Sep 13 00:16:56.184443 kubelet[2541]: E0913 00:16:56.184149 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:56.184443 kubelet[2541]: W0913 00:16:56.184165 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:56.184443 kubelet[2541]: E0913 00:16:56.184189 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 13 00:16:56.184995 kubelet[2541]: E0913 00:16:56.184661 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:16:56.184995 kubelet[2541]: W0913 00:16:56.184682 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:16:56.184995 kubelet[2541]: E0913 00:16:56.184825 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three kubelet FlexVolume probe messages above repeat continuously through 00:16:56.373; duplicate entries omitted]
Sep 13 00:16:56.187393 kubelet[2541]: I0913 00:16:56.186570 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5a380725-fcdb-4d95-86f1-35ff10380123-varrun\") pod \"csi-node-driver-k54nv\" (UID: \"5a380725-fcdb-4d95-86f1-35ff10380123\") " pod="calico-system/csi-node-driver-k54nv"
Sep 13 00:16:56.222349 containerd[1450]: time="2025-09-13T00:16:56.222222612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:16:56.222349 containerd[1450]: time="2025-09-13T00:16:56.222287221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:16:56.222349 containerd[1450]: time="2025-09-13T00:16:56.222299144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:16:56.222618 containerd[1450]: time="2025-09-13T00:16:56.222413374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:16:56.246224 systemd[1]: Started cri-containerd-9b720162cc476f44b1596c691045cea32cdb307e35482a1c2f7bd96c62b4e60c.scope - libcontainer container 9b720162cc476f44b1596c691045cea32cdb307e35482a1c2f7bd96c62b4e60c.
Sep 13 00:16:56.285470 containerd[1450]: time="2025-09-13T00:16:56.285392901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zpcg7,Uid:acf4d800-08e5-4f36-a724-125b155a82e6,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b720162cc476f44b1596c691045cea32cdb307e35482a1c2f7bd96c62b4e60c\""
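The burst above is the kubelet's FlexVolume dynamic plugin prober: on each probe it executes the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, and because that executable is missing it reads empty stdout, which then fails JSON decoding ("unexpected end of JSON input"). Below is a minimal sketch of a driver stub that would satisfy the probe, assuming the standard FlexVolume calling convention (a JSON status object printed to stdout); it is illustrative only, not the real nodeagent~uds driver.

// flexvol_stub.go - illustrative FlexVolume stub, not the actual nodeagent~uds driver.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape the kubelet's driver-call.go parses
// after every FlexVolume driver invocation.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Printing nothing here is exactly what produces the
		// "unexpected end of JSON input" errors in the log above.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		reply(driverStatus{Status: "Not supported"})
	}
}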
Sep 13 00:16:57.519129 kubelet[2541]: E0913 00:16:57.519040 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k54nv" podUID="5a380725-fcdb-4d95-86f1-35ff10380123"
Sep 13 00:16:57.775358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount119545036.mount: Deactivated successfully.
Sep 13 00:16:58.133229 containerd[1450]: time="2025-09-13T00:16:58.133127413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:16:58.134217 containerd[1450]: time="2025-09-13T00:16:58.134170187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 13 00:16:58.135520 containerd[1450]: time="2025-09-13T00:16:58.135489395Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:16:58.139064 containerd[1450]: time="2025-09-13T00:16:58.139005895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:16:58.140716 containerd[1450]: time="2025-09-13T00:16:58.140670813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.211165099s"
Sep 13 00:16:58.140716 containerd[1450]: time="2025-09-13T00:16:58.140701916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 13 00:16:58.148911 containerd[1450]: time="2025-09-13T00:16:58.148796369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 00:16:58.175247 containerd[1450]: time="2025-09-13T00:16:58.175184814Z" level=info msg="CreateContainer within sandbox \"32af97d0c22004e4ca848a430bd91997456dbee117fbb7d5e9df6744c9ccdbc0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 13 00:16:58.189383 containerd[1450]: time="2025-09-13T00:16:58.189306858Z" level=info msg="CreateContainer within sandbox \"32af97d0c22004e4ca848a430bd91997456dbee117fbb7d5e9df6744c9ccdbc0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d1c6c5349e9beafad938ba0985dc2a5e7e980659b0089477e34701b71b6dab1a\""
Sep 13 00:16:58.190169 containerd[1450]: time="2025-09-13T00:16:58.190061887Z" level=info msg="StartContainer for \"d1c6c5349e9beafad938ba0985dc2a5e7e980659b0089477e34701b71b6dab1a\""
Sep 13 00:16:58.228013 systemd[1]: Started cri-containerd-d1c6c5349e9beafad938ba0985dc2a5e7e980659b0089477e34701b71b6dab1a.scope - libcontainer container d1c6c5349e9beafad938ba0985dc2a5e7e980659b0089477e34701b71b6dab1a.
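The sequence above is containerd servicing the kubelet's CRI calls: PullImage for the calico/typha image, CreateContainer inside the already-running pod sandbox, then StartContainer (the matching systemd scope for the container appears alongside). The same pull/create/start flow can be reproduced against this node's containerd with its native Go client; the sketch below is a hedged smoke test under assumptions (the container ID "typha-demo" and snapshot name are invented), not what the kubelet itself does, since the kubelet speaks CRI rather than the native client API.

// containerd_smoketest.go - assumption-laden sketch of pull/create/start.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The kubelet's CRI-managed objects live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the same image the log shows containerd pulling.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container from the image's own OCI config.
	container, err := client.NewContainer(ctx, "typha-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("typha-demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// A task is the running instance of the container; starting it is the
	// equivalent of the StartContainer step in the log.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("container started")
}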
Sep 13 00:16:58.280913 containerd[1450]: time="2025-09-13T00:16:58.280833810Z" level=info msg="StartContainer for \"d1c6c5349e9beafad938ba0985dc2a5e7e980659b0089477e34701b71b6dab1a\" returns successfully"
Sep 13 00:16:58.587670 kubelet[2541]: E0913 00:16:58.587510 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:16:58.614891 kubelet[2541]: E0913 00:16:58.613388 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:16:58.614891 kubelet[2541]: W0913 00:16:58.613429 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:16:58.615737 kubelet[2541]: E0913 00:16:58.615570 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three kubelet FlexVolume probe messages above repeat continuously through 00:16:58.740; duplicate entries omitted]
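The dns.go:153 "Nameserver limits exceeded" entries above fire because the node's resolver configuration lists more nameservers than the kubelet will propagate into pod resolv.conf files; the applied line keeps the first three (1.1.1.1 1.0.0.1 8.8.8.8). Below is a small sketch of the same check, assuming the conventional /etc/resolv.conf location and the three-server cap implied by the message.

// resolvcheck.go - sketch of the nameserver-limit check the warning implies.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers is assumed to be 3, matching the classic resolver cap
// and the three-address "applied nameserver line" in the log.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Mirrors the kubelet's behavior: extra servers are dropped.
		fmt.Printf("limit exceeded: %d nameservers configured, only %v will apply\n",
			len(servers), servers[:maxNameservers])
		return
	}
	fmt.Println("nameserver count within limit:", servers)
}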
Sep 13 00:16:59.519509 kubelet[2541]: E0913 00:16:59.519445 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k54nv" podUID="5a380725-fcdb-4d95-86f1-35ff10380123"
Sep 13 00:16:59.588720 kubelet[2541]: I0913 00:16:59.588664 2541 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:16:59.589271 kubelet[2541]: E0913 00:16:59.589120 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:16:59.639179 kubelet[2541]: E0913 00:16:59.639106 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:16:59.639179 kubelet[2541]: W0913 00:16:59.639145 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:16:59.639179 kubelet[2541]: E0913 00:16:59.639177 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three kubelet FlexVolume probe messages above repeat continuously through 00:16:59.723; duplicate entries omitted]
Error: unexpected end of JSON input" Sep 13 00:16:59.723403 kubelet[2541]: E0913 00:16:59.723387 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:59.723403 kubelet[2541]: W0913 00:16:59.723399 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:59.723477 kubelet[2541]: E0913 00:16:59.723415 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:59.723715 kubelet[2541]: E0913 00:16:59.723698 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:59.723715 kubelet[2541]: W0913 00:16:59.723711 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:59.723822 kubelet[2541]: E0913 00:16:59.723750 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:59.723994 kubelet[2541]: E0913 00:16:59.723977 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:59.723994 kubelet[2541]: W0913 00:16:59.723990 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:59.724059 kubelet[2541]: E0913 00:16:59.724009 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:16:59.724309 kubelet[2541]: E0913 00:16:59.724280 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:59.724309 kubelet[2541]: W0913 00:16:59.724306 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:59.724386 kubelet[2541]: E0913 00:16:59.724320 2541 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:00.496003 containerd[1450]: time="2025-09-13T00:17:00.495918694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:00.497933 containerd[1450]: time="2025-09-13T00:17:00.497880219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:17:00.499123 containerd[1450]: time="2025-09-13T00:17:00.499079512Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:00.502072 containerd[1450]: time="2025-09-13T00:17:00.502004540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:00.502719 containerd[1450]: time="2025-09-13T00:17:00.502680890Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.353764242s" Sep 13 00:17:00.502719 containerd[1450]: time="2025-09-13T00:17:00.502714117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:17:00.505466 containerd[1450]: time="2025-09-13T00:17:00.505405096Z" level=info msg="CreateContainer within sandbox \"9b720162cc476f44b1596c691045cea32cdb307e35482a1c2f7bd96c62b4e60c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:17:00.527563 containerd[1450]: time="2025-09-13T00:17:00.527491416Z" level=info msg="CreateContainer within sandbox \"9b720162cc476f44b1596c691045cea32cdb307e35482a1c2f7bd96c62b4e60c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"42538b9e5453bd649d511937d46ae7266bd0fe4b8c724d8bed17dc4727a10fad\"" Sep 13 00:17:00.528253 containerd[1450]: time="2025-09-13T00:17:00.528214239Z" level=info msg="StartContainer for \"42538b9e5453bd649d511937d46ae7266bd0fe4b8c724d8bed17dc4727a10fad\"" Sep 13 00:17:00.571075 systemd[1]: Started cri-containerd-42538b9e5453bd649d511937d46ae7266bd0fe4b8c724d8bed17dc4727a10fad.scope - libcontainer container 42538b9e5453bd649d511937d46ae7266bd0fe4b8c724d8bed17dc4727a10fad. Sep 13 00:17:00.616661 containerd[1450]: time="2025-09-13T00:17:00.616612383Z" level=info msg="StartContainer for \"42538b9e5453bd649d511937d46ae7266bd0fe4b8c724d8bed17dc4727a10fad\" returns successfully" Sep 13 00:17:00.632387 systemd[1]: cri-containerd-42538b9e5453bd649d511937d46ae7266bd0fe4b8c724d8bed17dc4727a10fad.scope: Deactivated successfully. Sep 13 00:17:00.662060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42538b9e5453bd649d511937d46ae7266bd0fe4b8c724d8bed17dc4727a10fad-rootfs.mount: Deactivated successfully. 
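
The driver-call failures above are the kubelet's FlexVolume prober exec'ing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the verb init and expecting a JSON status document on stdout; a missing binary means empty output, hence "unexpected end of JSON input". The pod2daemon-flexvol image pulled above ships this uds driver, which is why the probe errors stop once the flexvol-driver container has run. A minimal sketch of the driver side of that call convention (the struct shape follows the FlexVolume status format; answering init with attach:false is an assumption for a socket-serving driver like this one):

package main

import (
	"encoding/json"
	"os"
)

// driverStatus mirrors the JSON status document the kubelet expects a
// FlexVolume driver to print on stdout for every call.
type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	// An empty stdout is exactly what produces "unexpected end of JSON
	// input" in the kubelet's driver-call unmarshal above.
	json.NewEncoder(os.Stdout).Encode(s)
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	// Verbs this sketch does not implement are reported as unsupported,
	// which the kubelet treats as a valid (non-error) driver answer.
	reply(driverStatus{Status: "Not supported"})
}
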
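For reference, the ImageCreate/Pulled sequence containerd logs above can be driven through its Go client as well as through the CRI; the socket path and the k8s.io namespace (where CRI-managed images live) are stock-node assumptions rather than values taken from this log:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "->", img.Target().Digest)
}
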
Sep 13 00:17:01.132623 containerd[1450]: time="2025-09-13T00:17:01.132437690Z" level=info msg="shim disconnected" id=42538b9e5453bd649d511937d46ae7266bd0fe4b8c724d8bed17dc4727a10fad namespace=k8s.io Sep 13 00:17:01.132623 containerd[1450]: time="2025-09-13T00:17:01.132550856Z" level=warning msg="cleaning up after shim disconnected" id=42538b9e5453bd649d511937d46ae7266bd0fe4b8c724d8bed17dc4727a10fad namespace=k8s.io Sep 13 00:17:01.132623 containerd[1450]: time="2025-09-13T00:17:01.132568962Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:17:01.519623 kubelet[2541]: E0913 00:17:01.519404 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k54nv" podUID="5a380725-fcdb-4d95-86f1-35ff10380123" Sep 13 00:17:01.599472 containerd[1450]: time="2025-09-13T00:17:01.599410196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:17:01.829322 kubelet[2541]: I0913 00:17:01.829089 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c8ccd5f5f-h9k9d" podStartSLOduration=4.609549243 podStartE2EDuration="6.829054027s" podCreationTimestamp="2025-09-13 00:16:55 +0000 UTC" firstStartedPulling="2025-09-13 00:16:55.92917412 +0000 UTC m=+21.514776356" lastFinishedPulling="2025-09-13 00:16:58.148678904 +0000 UTC m=+23.734281140" observedRunningTime="2025-09-13 00:16:58.603012156 +0000 UTC m=+24.188614392" watchObservedRunningTime="2025-09-13 00:17:01.829054027 +0000 UTC m=+27.414656263" Sep 13 00:17:03.519729 kubelet[2541]: E0913 00:17:03.519624 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k54nv" podUID="5a380725-fcdb-4d95-86f1-35ff10380123" Sep 13 00:17:06.425166 kubelet[2541]: E0913 00:17:06.424645 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k54nv" podUID="5a380725-fcdb-4d95-86f1-35ff10380123" Sep 13 00:17:06.465600 containerd[1450]: time="2025-09-13T00:17:06.465518343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:06.466669 containerd[1450]: time="2025-09-13T00:17:06.466573611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 00:17:06.468124 containerd[1450]: time="2025-09-13T00:17:06.468084654Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:06.474347 containerd[1450]: time="2025-09-13T00:17:06.474272895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:06.475209 containerd[1450]: time="2025-09-13T00:17:06.475170309Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" 
with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.87571357s" Sep 13 00:17:06.475209 containerd[1450]: time="2025-09-13T00:17:06.475204036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:17:06.478390 containerd[1450]: time="2025-09-13T00:17:06.478324068Z" level=info msg="CreateContainer within sandbox \"9b720162cc476f44b1596c691045cea32cdb307e35482a1c2f7bd96c62b4e60c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:17:06.500077 containerd[1450]: time="2025-09-13T00:17:06.499989514Z" level=info msg="CreateContainer within sandbox \"9b720162cc476f44b1596c691045cea32cdb307e35482a1c2f7bd96c62b4e60c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e5221b7f5fe12135032a720014e8925dfc414edef20d3924cd907fc96020dcdd\"" Sep 13 00:17:06.502402 containerd[1450]: time="2025-09-13T00:17:06.500972778Z" level=info msg="StartContainer for \"e5221b7f5fe12135032a720014e8925dfc414edef20d3924cd907fc96020dcdd\"" Sep 13 00:17:06.544131 systemd[1]: Started cri-containerd-e5221b7f5fe12135032a720014e8925dfc414edef20d3924cd907fc96020dcdd.scope - libcontainer container e5221b7f5fe12135032a720014e8925dfc414edef20d3924cd907fc96020dcdd. Sep 13 00:17:06.587836 containerd[1450]: time="2025-09-13T00:17:06.587737776Z" level=info msg="StartContainer for \"e5221b7f5fe12135032a720014e8925dfc414edef20d3924cd907fc96020dcdd\" returns successfully" Sep 13 00:17:08.200186 containerd[1450]: time="2025-09-13T00:17:08.200126286Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:17:08.203405 systemd[1]: cri-containerd-e5221b7f5fe12135032a720014e8925dfc414edef20d3924cd907fc96020dcdd.scope: Deactivated successfully. Sep 13 00:17:08.226752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5221b7f5fe12135032a720014e8925dfc414edef20d3924cd907fc96020dcdd-rootfs.mount: Deactivated successfully. Sep 13 00:17:08.289744 kubelet[2541]: I0913 00:17:08.289696 2541 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:17:08.526476 systemd[1]: Created slice kubepods-besteffort-pod5a380725_fcdb_4d95_86f1_35ff10380123.slice - libcontainer container kubepods-besteffort-pod5a380725_fcdb_4d95_86f1_35ff10380123.slice. 
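
The pod_startup_latency_tracker entry above carries its own arithmetic: podStartSLOduration is the end-to-end startup time minus the image-pull window, which the kubelet's startup SLI excludes. Recomputing both durations from the timestamps in that entry:

package main

import "fmt"

func main() {
	// Seconds past 00:16:00, copied from the tracker entry above.
	creation := 55.0                // podCreationTimestamp 00:16:55
	observedRunning := 61.829054027 // watchObservedRunningTime 00:17:01.829054027
	firstStartedPulling := 55.92917412
	lastFinishedPulling := 58.148678904

	e2e := observedRunning - creation                        // podStartE2EDuration
	slo := e2e - (lastFinishedPulling - firstStartedPulling) // pull window excluded

	fmt.Printf("podStartE2EDuration = %.9fs\n", e2e) // 6.829054027s
	fmt.Printf("podStartSLOduration = %.9fs\n", slo) // 4.609549243s (modulo float rounding)
}
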
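The reload failure above is containerd's CRI plugin reacting to a file event under /etc/cni/net.d before any network config exists: the write that fired was calico-kubeconfig, and the .conflist only appears once calico/node is up. A sketch of the same lookup with the reference libcni package (the directory and the .conflist expectation are Calico/CNI conventions, assumed here):

package main

import (
	"fmt"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Enumerate config files the way CNI runtimes do; Calico eventually
	// drops a 10-calico.conflist here, but at this point the directory
	// holds only the kubeconfig that triggered the fs event.
	files, err := libcni.ConfFiles("/etc/cni/net.d", []string{".conflist"})
	if err != nil {
		log.Fatal(err)
	}
	if len(files) == 0 {
		// The condition containerd reports above as "no network config
		// found in /etc/cni/net.d: cni plugin not initialized".
		log.Fatal("no network config found in /etc/cni/net.d")
	}
	for _, f := range files {
		conf, err := libcni.ConfListFromFile(f)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("network %q with %d plugin(s)\n", conf.Name, len(conf.Plugins))
	}
}
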
Sep 13 00:17:08.576641 containerd[1450]: time="2025-09-13T00:17:08.576559844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k54nv,Uid:5a380725-fcdb-4d95-86f1-35ff10380123,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:08.621619 containerd[1450]: time="2025-09-13T00:17:08.621543409Z" level=info msg="shim disconnected" id=e5221b7f5fe12135032a720014e8925dfc414edef20d3924cd907fc96020dcdd namespace=k8s.io Sep 13 00:17:08.621619 containerd[1450]: time="2025-09-13T00:17:08.621610462Z" level=warning msg="cleaning up after shim disconnected" id=e5221b7f5fe12135032a720014e8925dfc414edef20d3924cd907fc96020dcdd namespace=k8s.io Sep 13 00:17:08.621619 containerd[1450]: time="2025-09-13T00:17:08.621619079Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:17:08.662187 systemd[1]: Created slice kubepods-besteffort-podda51d102_bea9_4b73_b920_d878067a1d60.slice - libcontainer container kubepods-besteffort-podda51d102_bea9_4b73_b920_d878067a1d60.slice. Sep 13 00:17:08.692527 systemd[1]: Created slice kubepods-besteffort-podf43bb6c2_de90_41a2_b02c_8c9fc7f0992c.slice - libcontainer container kubepods-besteffort-podf43bb6c2_de90_41a2_b02c_8c9fc7f0992c.slice. Sep 13 00:17:08.697215 systemd[1]: Created slice kubepods-burstable-podbd84b29b_9cc2_416f_b838_56415fe1dcf0.slice - libcontainer container kubepods-burstable-podbd84b29b_9cc2_416f_b838_56415fe1dcf0.slice. Sep 13 00:17:08.702487 systemd[1]: Created slice kubepods-burstable-pod31ddf125_3a79_4bfa_9a91_12d895809873.slice - libcontainer container kubepods-burstable-pod31ddf125_3a79_4bfa_9a91_12d895809873.slice. Sep 13 00:17:08.707171 systemd[1]: Created slice kubepods-besteffort-pod052be84e_22d7_4ee1_a92b_8fcfaea0b69e.slice - libcontainer container kubepods-besteffort-pod052be84e_22d7_4ee1_a92b_8fcfaea0b69e.slice. Sep 13 00:17:08.711265 systemd[1]: Created slice kubepods-besteffort-pod7935845b_1239_4bbe_bb1a_5deeeba20ac8.slice - libcontainer container kubepods-besteffort-pod7935845b_1239_4bbe_bb1a_5deeeba20ac8.slice. Sep 13 00:17:08.715272 systemd[1]: Created slice kubepods-besteffort-podea66517c_8704_46d2_93ab_a6f19d47926d.slice - libcontainer container kubepods-besteffort-podea66517c_8704_46d2_93ab_a6f19d47926d.slice. 
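
The "Created slice" lines follow the kubelet systemd cgroup driver's naming scheme: kubepods, the pod's QoS class, then "pod" plus the UID with dashes mapped to underscores (guaranteed pods simply omit the QoS segment). A sketch of that derivation, inferred from the entries above rather than lifted from kubelet code:

package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the slice names visible above: kubepods, the QoS
// class, then "pod" plus the pod UID with "-" replaced by "_".
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Matches "kubepods-besteffort-pod5a380725_fcdb_4d95_86f1_35ff10380123.slice".
	fmt.Println(podSliceName("besteffort", "5a380725-fcdb-4d95-86f1-35ff10380123"))
	// Matches "kubepods-burstable-podbd84b29b_9cc2_416f_b838_56415fe1dcf0.slice".
	fmt.Println(podSliceName("burstable", "bd84b29b-9cc2-416f-b838-56415fe1dcf0"))
}
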
Sep 13 00:17:08.742675 kubelet[2541]: I0913 00:17:08.742618 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31ddf125-3a79-4bfa-9a91-12d895809873-config-volume\") pod \"coredns-668d6bf9bc-22g6l\" (UID: \"31ddf125-3a79-4bfa-9a91-12d895809873\") " pod="kube-system/coredns-668d6bf9bc-22g6l" Sep 13 00:17:08.742675 kubelet[2541]: I0913 00:17:08.742672 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz7sp\" (UniqueName: \"kubernetes.io/projected/052be84e-22d7-4ee1-a92b-8fcfaea0b69e-kube-api-access-mz7sp\") pod \"calico-apiserver-5b4b4787d4-9z5h7\" (UID: \"052be84e-22d7-4ee1-a92b-8fcfaea0b69e\") " pod="calico-apiserver/calico-apiserver-5b4b4787d4-9z5h7" Sep 13 00:17:08.742905 kubelet[2541]: I0913 00:17:08.742694 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7935845b-1239-4bbe-bb1a-5deeeba20ac8-config\") pod \"goldmane-54d579b49d-xz856\" (UID: \"7935845b-1239-4bbe-bb1a-5deeeba20ac8\") " pod="calico-system/goldmane-54d579b49d-xz856" Sep 13 00:17:08.742905 kubelet[2541]: I0913 00:17:08.742710 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm7tg\" (UniqueName: \"kubernetes.io/projected/f43bb6c2-de90-41a2-b02c-8c9fc7f0992c-kube-api-access-hm7tg\") pod \"calico-kube-controllers-64cb648bfc-dpqwh\" (UID: \"f43bb6c2-de90-41a2-b02c-8c9fc7f0992c\") " pod="calico-system/calico-kube-controllers-64cb648bfc-dpqwh" Sep 13 00:17:08.742905 kubelet[2541]: I0913 00:17:08.742726 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c82qt\" (UniqueName: \"kubernetes.io/projected/31ddf125-3a79-4bfa-9a91-12d895809873-kube-api-access-c82qt\") pod \"coredns-668d6bf9bc-22g6l\" (UID: \"31ddf125-3a79-4bfa-9a91-12d895809873\") " pod="kube-system/coredns-668d6bf9bc-22g6l" Sep 13 00:17:08.742905 kubelet[2541]: I0913 00:17:08.742740 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da51d102-bea9-4b73-b920-d878067a1d60-whisker-ca-bundle\") pod \"whisker-79b496d88c-swmdn\" (UID: \"da51d102-bea9-4b73-b920-d878067a1d60\") " pod="calico-system/whisker-79b496d88c-swmdn" Sep 13 00:17:08.742905 kubelet[2541]: I0913 00:17:08.742755 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f43bb6c2-de90-41a2-b02c-8c9fc7f0992c-tigera-ca-bundle\") pod \"calico-kube-controllers-64cb648bfc-dpqwh\" (UID: \"f43bb6c2-de90-41a2-b02c-8c9fc7f0992c\") " pod="calico-system/calico-kube-controllers-64cb648bfc-dpqwh" Sep 13 00:17:08.743040 kubelet[2541]: I0913 00:17:08.742770 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd84b29b-9cc2-416f-b838-56415fe1dcf0-config-volume\") pod \"coredns-668d6bf9bc-d7v2c\" (UID: \"bd84b29b-9cc2-416f-b838-56415fe1dcf0\") " pod="kube-system/coredns-668d6bf9bc-d7v2c" Sep 13 00:17:08.743040 kubelet[2541]: I0913 00:17:08.742784 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/da51d102-bea9-4b73-b920-d878067a1d60-whisker-backend-key-pair\") pod \"whisker-79b496d88c-swmdn\" (UID: \"da51d102-bea9-4b73-b920-d878067a1d60\") " pod="calico-system/whisker-79b496d88c-swmdn" Sep 13 00:17:08.743040 kubelet[2541]: I0913 00:17:08.742826 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnwvd\" (UniqueName: \"kubernetes.io/projected/bd84b29b-9cc2-416f-b838-56415fe1dcf0-kube-api-access-hnwvd\") pod \"coredns-668d6bf9bc-d7v2c\" (UID: \"bd84b29b-9cc2-416f-b838-56415fe1dcf0\") " pod="kube-system/coredns-668d6bf9bc-d7v2c" Sep 13 00:17:08.743040 kubelet[2541]: I0913 00:17:08.742850 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7935845b-1239-4bbe-bb1a-5deeeba20ac8-goldmane-key-pair\") pod \"goldmane-54d579b49d-xz856\" (UID: \"7935845b-1239-4bbe-bb1a-5deeeba20ac8\") " pod="calico-system/goldmane-54d579b49d-xz856" Sep 13 00:17:08.743040 kubelet[2541]: I0913 00:17:08.742871 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-265hl\" (UniqueName: \"kubernetes.io/projected/ea66517c-8704-46d2-93ab-a6f19d47926d-kube-api-access-265hl\") pod \"calico-apiserver-5b4b4787d4-c79sh\" (UID: \"ea66517c-8704-46d2-93ab-a6f19d47926d\") " pod="calico-apiserver/calico-apiserver-5b4b4787d4-c79sh" Sep 13 00:17:08.743195 kubelet[2541]: I0913 00:17:08.742892 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7935845b-1239-4bbe-bb1a-5deeeba20ac8-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-xz856\" (UID: \"7935845b-1239-4bbe-bb1a-5deeeba20ac8\") " pod="calico-system/goldmane-54d579b49d-xz856" Sep 13 00:17:08.743195 kubelet[2541]: I0913 00:17:08.742913 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pphxn\" (UniqueName: \"kubernetes.io/projected/7935845b-1239-4bbe-bb1a-5deeeba20ac8-kube-api-access-pphxn\") pod \"goldmane-54d579b49d-xz856\" (UID: \"7935845b-1239-4bbe-bb1a-5deeeba20ac8\") " pod="calico-system/goldmane-54d579b49d-xz856" Sep 13 00:17:08.743195 kubelet[2541]: I0913 00:17:08.742939 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ea66517c-8704-46d2-93ab-a6f19d47926d-calico-apiserver-certs\") pod \"calico-apiserver-5b4b4787d4-c79sh\" (UID: \"ea66517c-8704-46d2-93ab-a6f19d47926d\") " pod="calico-apiserver/calico-apiserver-5b4b4787d4-c79sh" Sep 13 00:17:08.743195 kubelet[2541]: I0913 00:17:08.742963 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/052be84e-22d7-4ee1-a92b-8fcfaea0b69e-calico-apiserver-certs\") pod \"calico-apiserver-5b4b4787d4-9z5h7\" (UID: \"052be84e-22d7-4ee1-a92b-8fcfaea0b69e\") " pod="calico-apiserver/calico-apiserver-5b4b4787d4-9z5h7" Sep 13 00:17:08.743195 kubelet[2541]: I0913 00:17:08.742979 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xkhk\" (UniqueName: \"kubernetes.io/projected/da51d102-bea9-4b73-b920-d878067a1d60-kube-api-access-4xkhk\") pod \"whisker-79b496d88c-swmdn\" (UID: \"da51d102-bea9-4b73-b920-d878067a1d60\") " 
pod="calico-system/whisker-79b496d88c-swmdn" Sep 13 00:17:08.808460 containerd[1450]: time="2025-09-13T00:17:08.808268922Z" level=error msg="Failed to destroy network for sandbox \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:08.809302 containerd[1450]: time="2025-09-13T00:17:08.808854706Z" level=error msg="encountered an error cleaning up failed sandbox \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:08.814626 containerd[1450]: time="2025-09-13T00:17:08.814542208Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k54nv,Uid:5a380725-fcdb-4d95-86f1-35ff10380123,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:08.814591 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f-shm.mount: Deactivated successfully. Sep 13 00:17:08.815158 kubelet[2541]: E0913 00:17:08.815083 2541 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:08.815271 kubelet[2541]: E0913 00:17:08.815230 2541 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k54nv" Sep 13 00:17:08.815441 kubelet[2541]: E0913 00:17:08.815296 2541 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k54nv" Sep 13 00:17:08.815441 kubelet[2541]: E0913 00:17:08.815349 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k54nv_calico-system(5a380725-fcdb-4d95-86f1-35ff10380123)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k54nv_calico-system(5a380725-fcdb-4d95-86f1-35ff10380123)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k54nv" podUID="5a380725-fcdb-4d95-86f1-35ff10380123" Sep 13 00:17:09.021211 kubelet[2541]: E0913 00:17:09.020891 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:09.021211 kubelet[2541]: E0913 00:17:09.021118 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:09.021396 containerd[1450]: time="2025-09-13T00:17:09.021173845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64cb648bfc-dpqwh,Uid:f43bb6c2-de90-41a2-b02c-8c9fc7f0992c,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:09.022030 containerd[1450]: time="2025-09-13T00:17:09.021181260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79b496d88c-swmdn,Uid:da51d102-bea9-4b73-b920-d878067a1d60,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:09.022030 containerd[1450]: time="2025-09-13T00:17:09.021954904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4b4787d4-c79sh,Uid:ea66517c-8704-46d2-93ab-a6f19d47926d,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:17:09.022313 containerd[1450]: time="2025-09-13T00:17:09.022125183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-22g6l,Uid:31ddf125-3a79-4bfa-9a91-12d895809873,Namespace:kube-system,Attempt:0,}" Sep 13 00:17:09.022313 containerd[1450]: time="2025-09-13T00:17:09.022256804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d7v2c,Uid:bd84b29b-9cc2-416f-b838-56415fe1dcf0,Namespace:kube-system,Attempt:0,}" Sep 13 00:17:09.022531 containerd[1450]: time="2025-09-13T00:17:09.022447151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-xz856,Uid:7935845b-1239-4bbe-bb1a-5deeeba20ac8,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:09.320322 containerd[1450]: time="2025-09-13T00:17:09.320184966Z" level=error msg="Failed to destroy network for sandbox \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.323543 containerd[1450]: time="2025-09-13T00:17:09.322021552Z" level=error msg="encountered an error cleaning up failed sandbox \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.323543 containerd[1450]: time="2025-09-13T00:17:09.322090929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64cb648bfc-dpqwh,Uid:f43bb6c2-de90-41a2-b02c-8c9fc7f0992c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.323709 kubelet[2541]: E0913 00:17:09.323258 2541 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.323709 kubelet[2541]: E0913 00:17:09.323337 2541 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64cb648bfc-dpqwh" Sep 13 00:17:09.323709 kubelet[2541]: E0913 00:17:09.323367 2541 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64cb648bfc-dpqwh" Sep 13 00:17:09.324057 kubelet[2541]: E0913 00:17:09.323420 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64cb648bfc-dpqwh_calico-system(f43bb6c2-de90-41a2-b02c-8c9fc7f0992c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64cb648bfc-dpqwh_calico-system(f43bb6c2-de90-41a2-b02c-8c9fc7f0992c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64cb648bfc-dpqwh" podUID="f43bb6c2-de90-41a2-b02c-8c9fc7f0992c" Sep 13 00:17:09.324566 containerd[1450]: time="2025-09-13T00:17:09.324523938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4b4787d4-9z5h7,Uid:052be84e-22d7-4ee1-a92b-8fcfaea0b69e,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:17:09.325491 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252-shm.mount: Deactivated successfully. 
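
Each "VerifyControllerAttachedVolume started" entry earlier corresponds to one volume in a pod spec: ConfigMap and Secret volumes declared by name, plus the kubelet-injected projected token behind every kube-api-access-* volume. A sketch of the pod-side declarations using two of the goldmane pod's volumes; the backing ConfigMap name is a hypothetical placeholder:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The goldmane pod's config and key-pair volumes as they would appear
	// in its spec; "goldmane" as the ConfigMap name is assumed, while the
	// Secret name matches the reconciler entry above.
	volumes := []corev1.Volume{
		{
			Name: "config",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "goldmane"},
				},
			},
		},
		{
			Name: "goldmane-key-pair",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "goldmane-key-pair"},
			},
		},
	}
	for _, v := range volumes {
		fmt.Println("declared volume:", v.Name)
	}
}
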
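Every RunPodSandbox failure in this stretch reduces to one precondition: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node writes once it is running with /var/lib/calico mounted, and the file does not exist yet. A sketch of the same check:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"log"
	"os"
	"strings"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"

	b, err := os.ReadFile(nodenameFile)
	if errors.Is(err, fs.ErrNotExist) {
		// The exact condition behind every failure above: calico/node has
		// not yet started (or lacks the /var/lib/calico mount), so the CNI
		// plugin refuses both the add and the delete paths.
		log.Fatalf("stat %s: no such file or directory", nodenameFile)
	}
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("calico nodename:", strings.TrimSpace(string(b)))
}
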
Sep 13 00:17:09.331820 containerd[1450]: time="2025-09-13T00:17:09.331762816Z" level=error msg="Failed to destroy network for sandbox \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.334239 containerd[1450]: time="2025-09-13T00:17:09.334209853Z" level=error msg="encountered an error cleaning up failed sandbox \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.334366 containerd[1450]: time="2025-09-13T00:17:09.334339710Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4b4787d4-c79sh,Uid:ea66517c-8704-46d2-93ab-a6f19d47926d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.334651 kubelet[2541]: E0913 00:17:09.334600 2541 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.334895 kubelet[2541]: E0913 00:17:09.334757 2541 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b4b4787d4-c79sh" Sep 13 00:17:09.334895 kubelet[2541]: E0913 00:17:09.334786 2541 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b4b4787d4-c79sh" Sep 13 00:17:09.334895 kubelet[2541]: E0913 00:17:09.334852 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b4b4787d4-c79sh_calico-apiserver(ea66517c-8704-46d2-93ab-a6f19d47926d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b4b4787d4-c79sh_calico-apiserver(ea66517c-8704-46d2-93ab-a6f19d47926d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b4b4787d4-c79sh" podUID="ea66517c-8704-46d2-93ab-a6f19d47926d" Sep 13 00:17:09.336151 containerd[1450]: time="2025-09-13T00:17:09.336112018Z" level=error msg="Failed to destroy network for sandbox \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.336774 containerd[1450]: time="2025-09-13T00:17:09.336732941Z" level=error msg="encountered an error cleaning up failed sandbox \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.336845 containerd[1450]: time="2025-09-13T00:17:09.336771337Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79b496d88c-swmdn,Uid:da51d102-bea9-4b73-b920-d878067a1d60,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.336980 kubelet[2541]: E0913 00:17:09.336958 2541 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.337015 kubelet[2541]: E0913 00:17:09.336990 2541 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79b496d88c-swmdn" Sep 13 00:17:09.337015 kubelet[2541]: E0913 00:17:09.337006 2541 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79b496d88c-swmdn" Sep 13 00:17:09.337094 kubelet[2541]: E0913 00:17:09.337033 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-79b496d88c-swmdn_calico-system(da51d102-bea9-4b73-b920-d878067a1d60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79b496d88c-swmdn_calico-system(da51d102-bea9-4b73-b920-d878067a1d60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79b496d88c-swmdn" podUID="da51d102-bea9-4b73-b920-d878067a1d60" Sep 13 00:17:09.337147 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b-shm.mount: Deactivated successfully. Sep 13 00:17:09.341680 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc-shm.mount: Deactivated successfully. Sep 13 00:17:09.351035 containerd[1450]: time="2025-09-13T00:17:09.350970609Z" level=error msg="Failed to destroy network for sandbox \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.352058 containerd[1450]: time="2025-09-13T00:17:09.352027106Z" level=error msg="encountered an error cleaning up failed sandbox \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.354459 containerd[1450]: time="2025-09-13T00:17:09.352184027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-xz856,Uid:7935845b-1239-4bbe-bb1a-5deeeba20ac8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.354957 kubelet[2541]: E0913 00:17:09.354903 2541 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.355247 kubelet[2541]: E0913 00:17:09.355098 2541 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-xz856" Sep 13 00:17:09.355247 kubelet[2541]: E0913 00:17:09.355143 2541 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-xz856" Sep 13 00:17:09.355247 kubelet[2541]: E0913 00:17:09.355197 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-54d579b49d-xz856_calico-system(7935845b-1239-4bbe-bb1a-5deeeba20ac8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-xz856_calico-system(7935845b-1239-4bbe-bb1a-5deeeba20ac8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-xz856" podUID="7935845b-1239-4bbe-bb1a-5deeeba20ac8" Sep 13 00:17:09.356557 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4-shm.mount: Deactivated successfully. Sep 13 00:17:09.366421 containerd[1450]: time="2025-09-13T00:17:09.366353892Z" level=error msg="Failed to destroy network for sandbox \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.367226 containerd[1450]: time="2025-09-13T00:17:09.367194159Z" level=error msg="encountered an error cleaning up failed sandbox \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.369868 containerd[1450]: time="2025-09-13T00:17:09.369833838Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-22g6l,Uid:31ddf125-3a79-4bfa-9a91-12d895809873,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.370272 kubelet[2541]: E0913 00:17:09.370216 2541 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.370459 kubelet[2541]: E0913 00:17:09.370415 2541 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-22g6l" Sep 13 00:17:09.370582 kubelet[2541]: E0913 00:17:09.370558 2541 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-22g6l" Sep 13 00:17:09.370758 kubelet[2541]: E0913 00:17:09.370705 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-22g6l_kube-system(31ddf125-3a79-4bfa-9a91-12d895809873)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-22g6l_kube-system(31ddf125-3a79-4bfa-9a91-12d895809873)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-22g6l" podUID="31ddf125-3a79-4bfa-9a91-12d895809873" Sep 13 00:17:09.386118 containerd[1450]: time="2025-09-13T00:17:09.386030425Z" level=error msg="Failed to destroy network for sandbox \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.386982 containerd[1450]: time="2025-09-13T00:17:09.386939059Z" level=error msg="encountered an error cleaning up failed sandbox \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.387178 containerd[1450]: time="2025-09-13T00:17:09.387118033Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d7v2c,Uid:bd84b29b-9cc2-416f-b838-56415fe1dcf0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.387582 kubelet[2541]: E0913 00:17:09.387534 2541 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.387812 kubelet[2541]: E0913 00:17:09.387771 2541 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d7v2c" Sep 13 00:17:09.387932 kubelet[2541]: E0913 00:17:09.387910 2541 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d7v2c" Sep 13 00:17:09.388168 kubelet[2541]: E0913 00:17:09.388057 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d7v2c_kube-system(bd84b29b-9cc2-416f-b838-56415fe1dcf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d7v2c_kube-system(bd84b29b-9cc2-416f-b838-56415fe1dcf0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d7v2c" podUID="bd84b29b-9cc2-416f-b838-56415fe1dcf0" Sep 13 00:17:09.450587 kubelet[2541]: I0913 00:17:09.450548 2541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Sep 13 00:17:09.452609 kubelet[2541]: I0913 00:17:09.452579 2541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Sep 13 00:17:09.459360 kubelet[2541]: I0913 00:17:09.459308 2541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Sep 13 00:17:09.468224 kubelet[2541]: I0913 00:17:09.467264 2541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Sep 13 00:17:09.496554 containerd[1450]: time="2025-09-13T00:17:09.496339604Z" level=info msg="StopPodSandbox for \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\"" Sep 13 00:17:09.497787 containerd[1450]: time="2025-09-13T00:17:09.497726617Z" level=info msg="StopPodSandbox for \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\"" Sep 13 00:17:09.498316 containerd[1450]: time="2025-09-13T00:17:09.498268894Z" level=info msg="StopPodSandbox for \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\"" Sep 13 00:17:09.499270 kubelet[2541]: I0913 00:17:09.498628 2541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" Sep 13 00:17:09.499514 containerd[1450]: time="2025-09-13T00:17:09.498283693Z" level=info msg="StopPodSandbox for \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\"" Sep 13 00:17:09.506962 containerd[1450]: time="2025-09-13T00:17:09.506714567Z" level=info msg="Ensure that sandbox 4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe in task-service has been cleanup successfully" Sep 13 00:17:09.507291 containerd[1450]: time="2025-09-13T00:17:09.507232194Z" level=info msg="Ensure that sandbox 6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc in task-service has been cleanup successfully" Sep 13 00:17:09.509866 containerd[1450]: time="2025-09-13T00:17:09.508751730Z" level=info msg="StopPodSandbox for \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\"" Sep 13 00:17:09.509866 containerd[1450]: time="2025-09-13T00:17:09.509050072Z" level=info msg="Ensure that sandbox 
ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b in task-service has been cleanup successfully" Sep 13 00:17:09.509866 containerd[1450]: time="2025-09-13T00:17:09.509666876Z" level=info msg="Ensure that sandbox fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f in task-service has been cleanup successfully" Sep 13 00:17:09.520490 containerd[1450]: time="2025-09-13T00:17:09.520415160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:17:09.529605 kubelet[2541]: I0913 00:17:09.527189 2541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" Sep 13 00:17:09.533360 containerd[1450]: time="2025-09-13T00:17:09.532039842Z" level=info msg="StopPodSandbox for \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\"" Sep 13 00:17:09.533360 containerd[1450]: time="2025-09-13T00:17:09.532471779Z" level=info msg="Ensure that sandbox aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4 in task-service has been cleanup successfully" Sep 13 00:17:09.534355 containerd[1450]: time="2025-09-13T00:17:09.534316140Z" level=info msg="Ensure that sandbox 6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252 in task-service has been cleanup successfully" Sep 13 00:17:09.540301 kubelet[2541]: I0913 00:17:09.540256 2541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" Sep 13 00:17:09.548286 containerd[1450]: time="2025-09-13T00:17:09.548222250Z" level=info msg="StopPodSandbox for \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\"" Sep 13 00:17:09.548562 containerd[1450]: time="2025-09-13T00:17:09.548528559Z" level=info msg="Ensure that sandbox fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1 in task-service has been cleanup successfully" Sep 13 00:17:09.563478 containerd[1450]: time="2025-09-13T00:17:09.563425956Z" level=error msg="Failed to destroy network for sandbox \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.564063 containerd[1450]: time="2025-09-13T00:17:09.564028162Z" level=error msg="encountered an error cleaning up failed sandbox \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.564213 containerd[1450]: time="2025-09-13T00:17:09.564188630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4b4787d4-9z5h7,Uid:052be84e-22d7-4ee1-a92b-8fcfaea0b69e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.564617 kubelet[2541]: E0913 00:17:09.564567 2541 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.565269 kubelet[2541]: E0913 00:17:09.564878 2541 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b4b4787d4-9z5h7" Sep 13 00:17:09.565269 kubelet[2541]: E0913 00:17:09.564909 2541 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b4b4787d4-9z5h7" Sep 13 00:17:09.565269 kubelet[2541]: E0913 00:17:09.564964 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b4b4787d4-9z5h7_calico-apiserver(052be84e-22d7-4ee1-a92b-8fcfaea0b69e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b4b4787d4-9z5h7_calico-apiserver(052be84e-22d7-4ee1-a92b-8fcfaea0b69e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b4b4787d4-9z5h7" podUID="052be84e-22d7-4ee1-a92b-8fcfaea0b69e" Sep 13 00:17:09.612608 containerd[1450]: time="2025-09-13T00:17:09.611546785Z" level=error msg="StopPodSandbox for \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\" failed" error="failed to destroy network for sandbox \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.612865 kubelet[2541]: E0913 00:17:09.612048 2541 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Sep 13 00:17:09.612865 kubelet[2541]: E0913 00:17:09.612159 2541 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe"} Sep 13 00:17:09.612865 kubelet[2541]: E0913 00:17:09.612301 2541 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31ddf125-3a79-4bfa-9a91-12d895809873\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:17:09.612865 kubelet[2541]: E0913 00:17:09.612336 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31ddf125-3a79-4bfa-9a91-12d895809873\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-22g6l" podUID="31ddf125-3a79-4bfa-9a91-12d895809873" Sep 13 00:17:09.625216 containerd[1450]: time="2025-09-13T00:17:09.623558927Z" level=error msg="StopPodSandbox for \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\" failed" error="failed to destroy network for sandbox \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.625365 kubelet[2541]: E0913 00:17:09.623932 2541 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" Sep 13 00:17:09.625365 kubelet[2541]: E0913 00:17:09.624001 2541 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4"} Sep 13 00:17:09.625365 kubelet[2541]: E0913 00:17:09.624063 2541 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7935845b-1239-4bbe-bb1a-5deeeba20ac8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:17:09.625365 kubelet[2541]: E0913 00:17:09.624097 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7935845b-1239-4bbe-bb1a-5deeeba20ac8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-xz856" podUID="7935845b-1239-4bbe-bb1a-5deeeba20ac8" Sep 13 00:17:09.626267 containerd[1450]: time="2025-09-13T00:17:09.626221702Z" level=error 
msg="StopPodSandbox for \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\" failed" error="failed to destroy network for sandbox \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.626323 containerd[1450]: time="2025-09-13T00:17:09.626273415Z" level=error msg="StopPodSandbox for \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\" failed" error="failed to destroy network for sandbox \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.626390 kubelet[2541]: E0913 00:17:09.626364 2541 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" Sep 13 00:17:09.626445 kubelet[2541]: E0913 00:17:09.626393 2541 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1"} Sep 13 00:17:09.626445 kubelet[2541]: E0913 00:17:09.626417 2541 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd84b29b-9cc2-416f-b838-56415fe1dcf0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:17:09.626445 kubelet[2541]: E0913 00:17:09.626435 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd84b29b-9cc2-416f-b838-56415fe1dcf0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d7v2c" podUID="bd84b29b-9cc2-416f-b838-56415fe1dcf0" Sep 13 00:17:09.626667 kubelet[2541]: E0913 00:17:09.626567 2541 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Sep 13 00:17:09.626667 kubelet[2541]: E0913 00:17:09.626657 2541 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f"} Sep 13 00:17:09.626730 kubelet[2541]: E0913 00:17:09.626678 2541 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a380725-fcdb-4d95-86f1-35ff10380123\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:17:09.626730 kubelet[2541]: E0913 00:17:09.626699 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a380725-fcdb-4d95-86f1-35ff10380123\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k54nv" podUID="5a380725-fcdb-4d95-86f1-35ff10380123" Sep 13 00:17:09.631092 containerd[1450]: time="2025-09-13T00:17:09.631027040Z" level=error msg="StopPodSandbox for \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\" failed" error="failed to destroy network for sandbox \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.631644 kubelet[2541]: E0913 00:17:09.631595 2541 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Sep 13 00:17:09.631720 kubelet[2541]: E0913 00:17:09.631655 2541 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252"} Sep 13 00:17:09.631720 kubelet[2541]: E0913 00:17:09.631689 2541 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f43bb6c2-de90-41a2-b02c-8c9fc7f0992c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:17:09.631720 kubelet[2541]: E0913 00:17:09.631711 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f43bb6c2-de90-41a2-b02c-8c9fc7f0992c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64cb648bfc-dpqwh" podUID="f43bb6c2-de90-41a2-b02c-8c9fc7f0992c" Sep 13 00:17:09.638476 containerd[1450]: time="2025-09-13T00:17:09.638418179Z" level=error msg="StopPodSandbox for \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\" failed" error="failed to destroy network for sandbox \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.638730 kubelet[2541]: E0913 00:17:09.638688 2541 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Sep 13 00:17:09.638931 kubelet[2541]: E0913 00:17:09.638743 2541 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc"} Sep 13 00:17:09.638931 kubelet[2541]: E0913 00:17:09.638776 2541 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da51d102-bea9-4b73-b920-d878067a1d60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:17:09.638931 kubelet[2541]: E0913 00:17:09.638817 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da51d102-bea9-4b73-b920-d878067a1d60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79b496d88c-swmdn" podUID="da51d102-bea9-4b73-b920-d878067a1d60" Sep 13 00:17:09.641470 containerd[1450]: time="2025-09-13T00:17:09.641389958Z" level=error msg="StopPodSandbox for \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\" failed" error="failed to destroy network for sandbox \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:09.641730 kubelet[2541]: E0913 00:17:09.641691 2541 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" Sep 13 00:17:09.641730 kubelet[2541]: E0913 00:17:09.641734 2541 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b"} Sep 13 00:17:09.641839 kubelet[2541]: E0913 00:17:09.641765 2541 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea66517c-8704-46d2-93ab-a6f19d47926d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:17:09.641839 kubelet[2541]: E0913 00:17:09.641793 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea66517c-8704-46d2-93ab-a6f19d47926d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b4b4787d4-c79sh" podUID="ea66517c-8704-46d2-93ab-a6f19d47926d" Sep 13 00:17:10.228600 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe-shm.mount: Deactivated successfully. Sep 13 00:17:10.228740 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1-shm.mount: Deactivated successfully. 
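
Every failure in the stretch above has one root cause: the Calico CNI plugin discovers which node it is on by reading /var/lib/calico/nodename, a file the calico/node container writes when it starts; until that container runs (its image is still being pulled at this point, per the PullImage line above), every sandbox add and delete aborts with the same stat error and kubelet keeps retrying. A minimal Go sketch of that lookup, assuming only what the error text shows (the real check lives in Calico's cni-plugin; determineNodename is an illustrative name, not the plugin's actual code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodenameFile is written by calico/node at startup; the CNI plugin
    // reads it back so both agree on the node's name.
    const nodenameFile = "/var/lib/calico/nodename"

    func determineNodename() (string, error) {
        if _, err := os.Stat(nodenameFile); err != nil {
            // err formats as "stat /var/lib/calico/nodename: no such
            // file or directory", producing the message seen in the log.
            return "", fmt.Errorf("%v: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := determineNodename()
        if err != nil {
            fmt.Println("CNI add failed:", err)
            return
        }
        fmt.Println("node name:", name)
    }

The delete path goes through the same lookup, which is why even the StopPodSandbox cleanup calls fail until calico/node is up.
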
Sep 13 00:17:10.545425 kubelet[2541]: I0913 00:17:10.545134 2541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Sep 13 00:17:10.546021 containerd[1450]: time="2025-09-13T00:17:10.545943823Z" level=info msg="StopPodSandbox for \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\"" Sep 13 00:17:10.555637 containerd[1450]: time="2025-09-13T00:17:10.555560891Z" level=info msg="Ensure that sandbox b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a in task-service has been cleanup successfully" Sep 13 00:17:10.590186 containerd[1450]: time="2025-09-13T00:17:10.589671115Z" level=error msg="StopPodSandbox for \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\" failed" error="failed to destroy network for sandbox \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:10.593123 kubelet[2541]: E0913 00:17:10.593041 2541 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Sep 13 00:17:10.593123 kubelet[2541]: E0913 00:17:10.593120 2541 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a"} Sep 13 00:17:10.593411 kubelet[2541]: E0913 00:17:10.593168 2541 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"052be84e-22d7-4ee1-a92b-8fcfaea0b69e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:17:10.593411 kubelet[2541]: E0913 00:17:10.593222 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"052be84e-22d7-4ee1-a92b-8fcfaea0b69e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b4b4787d4-9z5h7" podUID="052be84e-22d7-4ee1-a92b-8fcfaea0b69e" Sep 13 00:17:16.416538 kubelet[2541]: I0913 00:17:16.414333 2541 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:17:16.416538 kubelet[2541]: E0913 00:17:16.414756 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:17.421963 kubelet[2541]: E0913 00:17:17.421922 2541 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:17.652132 systemd[1]: Started sshd@7-10.0.0.139:22-10.0.0.1:40100.service - OpenSSH per-connection server daemon (10.0.0.1:40100). Sep 13 00:17:17.740834 sshd[3799]: Accepted publickey for core from 10.0.0.1 port 40100 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:17:17.742849 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:17.751832 systemd-logind[1432]: New session 8 of user core. Sep 13 00:17:17.759983 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:17:18.117698 sshd[3799]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:18.122624 systemd[1]: sshd@7-10.0.0.139:22-10.0.0.1:40100.service: Deactivated successfully. Sep 13 00:17:18.126303 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:17:18.127875 systemd-logind[1432]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:17:18.129115 systemd-logind[1432]: Removed session 8. Sep 13 00:17:18.817900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773170472.mount: Deactivated successfully. Sep 13 00:17:20.548157 containerd[1450]: time="2025-09-13T00:17:20.548067025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:20.550451 containerd[1450]: time="2025-09-13T00:17:20.550340211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:17:20.553258 containerd[1450]: time="2025-09-13T00:17:20.553193939Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:20.557092 containerd[1450]: time="2025-09-13T00:17:20.557046290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:20.557668 containerd[1450]: time="2025-09-13T00:17:20.557632642Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 11.037167535s" Sep 13 00:17:20.557668 containerd[1450]: time="2025-09-13T00:17:20.557664812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:17:20.568082 containerd[1450]: time="2025-09-13T00:17:20.567450044Z" level=info msg="CreateContainer within sandbox \"9b720162cc476f44b1596c691045cea32cdb307e35482a1c2f7bd96c62b4e60c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:17:20.590879 containerd[1450]: time="2025-09-13T00:17:20.590821786Z" level=info msg="CreateContainer within sandbox \"9b720162cc476f44b1596c691045cea32cdb307e35482a1c2f7bd96c62b4e60c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"aca88f7182193913fbf2869ed8f91b73114a7e12c8be93a2b15d5d618914b10b\"" Sep 13 00:17:20.591708 containerd[1450]: 
time="2025-09-13T00:17:20.591487885Z" level=info msg="StartContainer for \"aca88f7182193913fbf2869ed8f91b73114a7e12c8be93a2b15d5d618914b10b\"" Sep 13 00:17:20.649148 systemd[1]: Started cri-containerd-aca88f7182193913fbf2869ed8f91b73114a7e12c8be93a2b15d5d618914b10b.scope - libcontainer container aca88f7182193913fbf2869ed8f91b73114a7e12c8be93a2b15d5d618914b10b. Sep 13 00:17:20.796646 containerd[1450]: time="2025-09-13T00:17:20.796566530Z" level=info msg="StartContainer for \"aca88f7182193913fbf2869ed8f91b73114a7e12c8be93a2b15d5d618914b10b\" returns successfully" Sep 13 00:17:20.833988 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:17:20.835357 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 13 00:17:20.931199 containerd[1450]: time="2025-09-13T00:17:20.931134620Z" level=info msg="StopPodSandbox for \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\"" Sep 13 00:17:21.129790 containerd[1450]: 2025-09-13 00:17:21.007 [INFO][3881] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Sep 13 00:17:21.129790 containerd[1450]: 2025-09-13 00:17:21.008 [INFO][3881] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" iface="eth0" netns="/var/run/netns/cni-19644750-6835-d870-ea18-112622308191" Sep 13 00:17:21.129790 containerd[1450]: 2025-09-13 00:17:21.009 [INFO][3881] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" iface="eth0" netns="/var/run/netns/cni-19644750-6835-d870-ea18-112622308191" Sep 13 00:17:21.129790 containerd[1450]: 2025-09-13 00:17:21.010 [INFO][3881] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" iface="eth0" netns="/var/run/netns/cni-19644750-6835-d870-ea18-112622308191" Sep 13 00:17:21.129790 containerd[1450]: 2025-09-13 00:17:21.010 [INFO][3881] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Sep 13 00:17:21.129790 containerd[1450]: 2025-09-13 00:17:21.011 [INFO][3881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Sep 13 00:17:21.129790 containerd[1450]: 2025-09-13 00:17:21.103 [INFO][3891] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" HandleID="k8s-pod-network.6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Workload="localhost-k8s-whisker--79b496d88c--swmdn-eth0" Sep 13 00:17:21.129790 containerd[1450]: 2025-09-13 00:17:21.108 [INFO][3891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:21.129790 containerd[1450]: 2025-09-13 00:17:21.108 [INFO][3891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:21.129790 containerd[1450]: 2025-09-13 00:17:21.118 [WARNING][3891] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" HandleID="k8s-pod-network.6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Workload="localhost-k8s-whisker--79b496d88c--swmdn-eth0" Sep 13 00:17:21.129790 containerd[1450]: 2025-09-13 00:17:21.118 [INFO][3891] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" HandleID="k8s-pod-network.6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Workload="localhost-k8s-whisker--79b496d88c--swmdn-eth0" Sep 13 00:17:21.129790 containerd[1450]: 2025-09-13 00:17:21.121 [INFO][3891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:21.129790 containerd[1450]: 2025-09-13 00:17:21.124 [INFO][3881] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Sep 13 00:17:21.130539 containerd[1450]: time="2025-09-13T00:17:21.130278183Z" level=info msg="TearDown network for sandbox \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\" successfully" Sep 13 00:17:21.130539 containerd[1450]: time="2025-09-13T00:17:21.130317987Z" level=info msg="StopPodSandbox for \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\" returns successfully" Sep 13 00:17:21.176057 kubelet[2541]: I0913 00:17:21.175990 2541 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da51d102-bea9-4b73-b920-d878067a1d60-whisker-backend-key-pair\") pod \"da51d102-bea9-4b73-b920-d878067a1d60\" (UID: \"da51d102-bea9-4b73-b920-d878067a1d60\") " Sep 13 00:17:21.176057 kubelet[2541]: I0913 00:17:21.176048 2541 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da51d102-bea9-4b73-b920-d878067a1d60-whisker-ca-bundle\") pod \"da51d102-bea9-4b73-b920-d878067a1d60\" (UID: \"da51d102-bea9-4b73-b920-d878067a1d60\") " Sep 13 00:17:21.176057 kubelet[2541]: I0913 00:17:21.176074 2541 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xkhk\" (UniqueName: \"kubernetes.io/projected/da51d102-bea9-4b73-b920-d878067a1d60-kube-api-access-4xkhk\") pod \"da51d102-bea9-4b73-b920-d878067a1d60\" (UID: \"da51d102-bea9-4b73-b920-d878067a1d60\") " Sep 13 00:17:21.176826 kubelet[2541]: I0913 00:17:21.176743 2541 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da51d102-bea9-4b73-b920-d878067a1d60-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "da51d102-bea9-4b73-b920-d878067a1d60" (UID: "da51d102-bea9-4b73-b920-d878067a1d60"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:17:21.182226 kubelet[2541]: I0913 00:17:21.182064 2541 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da51d102-bea9-4b73-b920-d878067a1d60-kube-api-access-4xkhk" (OuterVolumeSpecName: "kube-api-access-4xkhk") pod "da51d102-bea9-4b73-b920-d878067a1d60" (UID: "da51d102-bea9-4b73-b920-d878067a1d60"). InnerVolumeSpecName "kube-api-access-4xkhk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:17:21.182226 kubelet[2541]: I0913 00:17:21.182211 2541 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da51d102-bea9-4b73-b920-d878067a1d60-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "da51d102-bea9-4b73-b920-d878067a1d60" (UID: "da51d102-bea9-4b73-b920-d878067a1d60"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:17:21.277287 kubelet[2541]: I0913 00:17:21.277221 2541 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da51d102-bea9-4b73-b920-d878067a1d60-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 00:17:21.277287 kubelet[2541]: I0913 00:17:21.277270 2541 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da51d102-bea9-4b73-b920-d878067a1d60-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 00:17:21.277287 kubelet[2541]: I0913 00:17:21.277280 2541 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4xkhk\" (UniqueName: \"kubernetes.io/projected/da51d102-bea9-4b73-b920-d878067a1d60-kube-api-access-4xkhk\") on node \"localhost\" DevicePath \"\"" Sep 13 00:17:21.440322 systemd[1]: Removed slice kubepods-besteffort-podda51d102_bea9_4b73_b920_d878067a1d60.slice - libcontainer container kubepods-besteffort-podda51d102_bea9_4b73_b920_d878067a1d60.slice. Sep 13 00:17:21.470093 kubelet[2541]: I0913 00:17:21.470022 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zpcg7" podStartSLOduration=2.198815083 podStartE2EDuration="26.470001456s" podCreationTimestamp="2025-09-13 00:16:55 +0000 UTC" firstStartedPulling="2025-09-13 00:16:56.287393889 +0000 UTC m=+21.872996125" lastFinishedPulling="2025-09-13 00:17:20.558580272 +0000 UTC m=+46.144182498" observedRunningTime="2025-09-13 00:17:21.46890588 +0000 UTC m=+47.054508126" watchObservedRunningTime="2025-09-13 00:17:21.470001456 +0000 UTC m=+47.055603692" Sep 13 00:17:21.521830 containerd[1450]: time="2025-09-13T00:17:21.521229129Z" level=info msg="StopPodSandbox for \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\"" Sep 13 00:17:21.524015 containerd[1450]: time="2025-09-13T00:17:21.523927538Z" level=info msg="StopPodSandbox for \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\"" Sep 13 00:17:21.552295 systemd[1]: Created slice kubepods-besteffort-pod8ed6bdef_b7b8_4a38_9a8e_375d7ba7c712.slice - libcontainer container kubepods-besteffort-pod8ed6bdef_b7b8_4a38_9a8e_375d7ba7c712.slice. Sep 13 00:17:21.565339 systemd[1]: run-netns-cni\x2d19644750\x2d6835\x2dd870\x2dea18\x2d112622308191.mount: Deactivated successfully. Sep 13 00:17:21.565749 systemd[1]: var-lib-kubelet-pods-da51d102\x2dbea9\x2d4b73\x2db920\x2dd878067a1d60-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4xkhk.mount: Deactivated successfully. Sep 13 00:17:21.566009 systemd[1]: var-lib-kubelet-pods-da51d102\x2dbea9\x2d4b73\x2db920\x2dd878067a1d60-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 13 00:17:21.580782 kubelet[2541]: I0913 00:17:21.579959 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8ed6bdef-b7b8-4a38-9a8e-375d7ba7c712-whisker-backend-key-pair\") pod \"whisker-6bc8bd7fbb-r6htj\" (UID: \"8ed6bdef-b7b8-4a38-9a8e-375d7ba7c712\") " pod="calico-system/whisker-6bc8bd7fbb-r6htj" Sep 13 00:17:21.580782 kubelet[2541]: I0913 00:17:21.580034 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ed6bdef-b7b8-4a38-9a8e-375d7ba7c712-whisker-ca-bundle\") pod \"whisker-6bc8bd7fbb-r6htj\" (UID: \"8ed6bdef-b7b8-4a38-9a8e-375d7ba7c712\") " pod="calico-system/whisker-6bc8bd7fbb-r6htj" Sep 13 00:17:21.580782 kubelet[2541]: I0913 00:17:21.580055 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlqkx\" (UniqueName: \"kubernetes.io/projected/8ed6bdef-b7b8-4a38-9a8e-375d7ba7c712-kube-api-access-rlqkx\") pod \"whisker-6bc8bd7fbb-r6htj\" (UID: \"8ed6bdef-b7b8-4a38-9a8e-375d7ba7c712\") " pod="calico-system/whisker-6bc8bd7fbb-r6htj" Sep 13 00:17:21.640320 containerd[1450]: 2025-09-13 00:17:21.600 [INFO][3937] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Sep 13 00:17:21.640320 containerd[1450]: 2025-09-13 00:17:21.601 [INFO][3937] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" iface="eth0" netns="/var/run/netns/cni-9a9e1bbb-44b4-4b82-f784-f83185e802ab" Sep 13 00:17:21.640320 containerd[1450]: 2025-09-13 00:17:21.602 [INFO][3937] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" iface="eth0" netns="/var/run/netns/cni-9a9e1bbb-44b4-4b82-f784-f83185e802ab" Sep 13 00:17:21.640320 containerd[1450]: 2025-09-13 00:17:21.602 [INFO][3937] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" iface="eth0" netns="/var/run/netns/cni-9a9e1bbb-44b4-4b82-f784-f83185e802ab" Sep 13 00:17:21.640320 containerd[1450]: 2025-09-13 00:17:21.602 [INFO][3937] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Sep 13 00:17:21.640320 containerd[1450]: 2025-09-13 00:17:21.602 [INFO][3937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Sep 13 00:17:21.640320 containerd[1450]: 2025-09-13 00:17:21.628 [INFO][3952] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" HandleID="k8s-pod-network.6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Workload="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" Sep 13 00:17:21.640320 containerd[1450]: 2025-09-13 00:17:21.628 [INFO][3952] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:21.640320 containerd[1450]: 2025-09-13 00:17:21.628 [INFO][3952] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:21.640320 containerd[1450]: 2025-09-13 00:17:21.633 [WARNING][3952] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" HandleID="k8s-pod-network.6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Workload="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" Sep 13 00:17:21.640320 containerd[1450]: 2025-09-13 00:17:21.634 [INFO][3952] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" HandleID="k8s-pod-network.6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Workload="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" Sep 13 00:17:21.640320 containerd[1450]: 2025-09-13 00:17:21.635 [INFO][3952] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:21.640320 containerd[1450]: 2025-09-13 00:17:21.637 [INFO][3937] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Sep 13 00:17:21.643046 containerd[1450]: time="2025-09-13T00:17:21.642939050Z" level=info msg="TearDown network for sandbox \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\" successfully" Sep 13 00:17:21.643046 containerd[1450]: time="2025-09-13T00:17:21.642977331Z" level=info msg="StopPodSandbox for \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\" returns successfully" Sep 13 00:17:21.643386 systemd[1]: run-netns-cni\x2d9a9e1bbb\x2d44b4\x2d4b82\x2df784\x2df83185e802ab.mount: Deactivated successfully. Sep 13 00:17:21.643937 containerd[1450]: time="2025-09-13T00:17:21.643903133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64cb648bfc-dpqwh,Uid:f43bb6c2-de90-41a2-b02c-8c9fc7f0992c,Namespace:calico-system,Attempt:1,}" Sep 13 00:17:21.656342 containerd[1450]: 2025-09-13 00:17:21.601 [INFO][3935] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" Sep 13 00:17:21.656342 containerd[1450]: 2025-09-13 00:17:21.601 [INFO][3935] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" iface="eth0" netns="/var/run/netns/cni-df4ed16b-1cdd-817d-a13d-3e7b64b2ac6f" Sep 13 00:17:21.656342 containerd[1450]: 2025-09-13 00:17:21.602 [INFO][3935] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" iface="eth0" netns="/var/run/netns/cni-df4ed16b-1cdd-817d-a13d-3e7b64b2ac6f" Sep 13 00:17:21.656342 containerd[1450]: 2025-09-13 00:17:21.602 [INFO][3935] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" iface="eth0" netns="/var/run/netns/cni-df4ed16b-1cdd-817d-a13d-3e7b64b2ac6f" Sep 13 00:17:21.656342 containerd[1450]: 2025-09-13 00:17:21.602 [INFO][3935] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" Sep 13 00:17:21.656342 containerd[1450]: 2025-09-13 00:17:21.602 [INFO][3935] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" Sep 13 00:17:21.656342 containerd[1450]: 2025-09-13 00:17:21.629 [INFO][3953] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" HandleID="k8s-pod-network.aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" Workload="localhost-k8s-goldmane--54d579b49d--xz856-eth0" Sep 13 00:17:21.656342 containerd[1450]: 2025-09-13 00:17:21.629 [INFO][3953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:21.656342 containerd[1450]: 2025-09-13 00:17:21.635 [INFO][3953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:21.656342 containerd[1450]: 2025-09-13 00:17:21.646 [WARNING][3953] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" HandleID="k8s-pod-network.aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" Workload="localhost-k8s-goldmane--54d579b49d--xz856-eth0" Sep 13 00:17:21.656342 containerd[1450]: 2025-09-13 00:17:21.646 [INFO][3953] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" HandleID="k8s-pod-network.aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" Workload="localhost-k8s-goldmane--54d579b49d--xz856-eth0" Sep 13 00:17:21.656342 containerd[1450]: 2025-09-13 00:17:21.648 [INFO][3953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:21.656342 containerd[1450]: 2025-09-13 00:17:21.652 [INFO][3935] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" Sep 13 00:17:21.656918 containerd[1450]: time="2025-09-13T00:17:21.656596036Z" level=info msg="TearDown network for sandbox \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\" successfully" Sep 13 00:17:21.656918 containerd[1450]: time="2025-09-13T00:17:21.656648913Z" level=info msg="StopPodSandbox for \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\" returns successfully" Sep 13 00:17:21.657868 containerd[1450]: time="2025-09-13T00:17:21.657798038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-xz856,Uid:7935845b-1239-4bbe-bb1a-5deeeba20ac8,Namespace:calico-system,Attempt:1,}" Sep 13 00:17:21.660187 systemd[1]: run-netns-cni\x2ddf4ed16b\x2d1cdd\x2d817d\x2da13d\x2d3e7b64b2ac6f.mount: Deactivated successfully. 
Sep 13 00:17:21.816552 systemd-networkd[1384]: calibdc79c99898: Link UP Sep 13 00:17:21.819893 systemd-networkd[1384]: calibdc79c99898: Gained carrier Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.709 [INFO][3980] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.721 [INFO][3980] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--xz856-eth0 goldmane-54d579b49d- calico-system 7935845b-1239-4bbe-bb1a-5deeeba20ac8 965 0 2025-09-13 00:16:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-xz856 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibdc79c99898 [] [] }} ContainerID="7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" Namespace="calico-system" Pod="goldmane-54d579b49d-xz856" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--xz856-" Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.721 [INFO][3980] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" Namespace="calico-system" Pod="goldmane-54d579b49d-xz856" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--xz856-eth0" Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.758 [INFO][4004] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" HandleID="k8s-pod-network.7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" Workload="localhost-k8s-goldmane--54d579b49d--xz856-eth0" Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.758 [INFO][4004] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" HandleID="k8s-pod-network.7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" Workload="localhost-k8s-goldmane--54d579b49d--xz856-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001397c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-xz856", "timestamp":"2025-09-13 00:17:21.758458895 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.758 [INFO][4004] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.758 [INFO][4004] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
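
The Workload= and WorkloadEndpoint= identifiers above follow one scheme, consistent across every endpoint in this log: node, orchestrator, pod name, and interface joined by single dashes, with any dash inside a field doubled, so pod goldmane-54d579b49d-xz856 on node localhost becomes localhost-k8s-goldmane--54d579b49d--xz856-eth0. A sketch of that construction (Calico's actual implementation lives in libcalico-go and covers more identifier kinds; this is inferred from the names in the log):

    package main

    import (
        "fmt"
        "strings"
    )

    // wepName joins the endpoint identifiers with single dashes and
    // doubles any literal dash so the name parses unambiguously.
    func wepName(node, orchestrator, pod, iface string) string {
        esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
        return strings.Join([]string{esc(node), esc(orchestrator), esc(pod), esc(iface)}, "-")
    }

    func main() {
        fmt.Println(wepName("localhost", "k8s", "goldmane-54d579b49d-xz856", "eth0"))
        // localhost-k8s-goldmane--54d579b49d--xz856-eth0, as in the log
    }
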
Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.759 [INFO][4004] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.768 [INFO][4004] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" host="localhost" Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.775 [INFO][4004] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.780 [INFO][4004] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.782 [INFO][4004] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.785 [INFO][4004] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.785 [INFO][4004] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" host="localhost" Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.788 [INFO][4004] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448 Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.794 [INFO][4004] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" host="localhost" Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.802 [INFO][4004] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" host="localhost" Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.802 [INFO][4004] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" host="localhost" Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.802 [INFO][4004] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
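
The [4004] trace is Calico IPAM's normal path under the host-wide lock: confirm this node's affinity to the block 192.168.88.128/26, load the block, claim the lowest free address by writing the block back, and record it under a new handle. A toy Go model of assigning out of such a /26 (the real allocator in libcalico-go adds handles, compare-and-swap retries, and reservations; the block's first address, .128, is modeled as already held, since the first claim in this log is .129):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block models a Calico IPAM affinity block: a /26 plus a bitmap of
    // which of its 64 ordinals are taken.
    type block struct {
        cidr netip.Prefix
        used [64]bool
    }

    // assign claims the lowest free ordinal and returns its address.
    func (b *block) assign() (netip.Addr, bool) {
        ip := b.cidr.Addr()
        for ord := 0; ord < 64; ord++ {
            if !b.used[ord] {
                b.used[ord] = true
                return ip, true
            }
            ip = ip.Next()
        }
        return netip.Addr{}, false
    }

    func main() {
        b := &block{cidr: netip.MustParsePrefix("192.168.88.128/26")}
        b.used[0] = true // .128 evidently already held on this node; the log's first claim is .129
        ip, ok := b.assign()
        fmt.Println(ip, ok) // 192.168.88.129 true, the address claimed for goldmane-54d579b49d-xz856
    }
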
Sep 13 00:17:21.835575 containerd[1450]: 2025-09-13 00:17:21.802 [INFO][4004] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" HandleID="k8s-pod-network.7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" Workload="localhost-k8s-goldmane--54d579b49d--xz856-eth0" Sep 13 00:17:21.836577 containerd[1450]: 2025-09-13 00:17:21.807 [INFO][3980] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" Namespace="calico-system" Pod="goldmane-54d579b49d-xz856" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--xz856-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--xz856-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"7935845b-1239-4bbe-bb1a-5deeeba20ac8", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-xz856", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibdc79c99898", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:21.836577 containerd[1450]: 2025-09-13 00:17:21.807 [INFO][3980] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" Namespace="calico-system" Pod="goldmane-54d579b49d-xz856" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--xz856-eth0" Sep 13 00:17:21.836577 containerd[1450]: 2025-09-13 00:17:21.807 [INFO][3980] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibdc79c99898 ContainerID="7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" Namespace="calico-system" Pod="goldmane-54d579b49d-xz856" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--xz856-eth0" Sep 13 00:17:21.836577 containerd[1450]: 2025-09-13 00:17:21.817 [INFO][3980] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" Namespace="calico-system" Pod="goldmane-54d579b49d-xz856" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--xz856-eth0" Sep 13 00:17:21.836577 containerd[1450]: 2025-09-13 00:17:21.818 [INFO][3980] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" Namespace="calico-system" Pod="goldmane-54d579b49d-xz856" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--xz856-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--xz856-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"7935845b-1239-4bbe-bb1a-5deeeba20ac8", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448", Pod:"goldmane-54d579b49d-xz856", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibdc79c99898", MAC:"02:5c:f0:b5:cd:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:21.836577 containerd[1450]: 2025-09-13 00:17:21.832 [INFO][3980] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448" Namespace="calico-system" Pod="goldmane-54d579b49d-xz856" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--xz856-eth0" Sep 13 00:17:21.859227 containerd[1450]: time="2025-09-13T00:17:21.859021120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bc8bd7fbb-r6htj,Uid:8ed6bdef-b7b8-4a38-9a8e-375d7ba7c712,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:21.881604 containerd[1450]: time="2025-09-13T00:17:21.881062830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:21.882200 containerd[1450]: time="2025-09-13T00:17:21.882150061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:21.882297 containerd[1450]: time="2025-09-13T00:17:21.882211544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:21.883990 containerd[1450]: time="2025-09-13T00:17:21.882352295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:21.924150 systemd[1]: Started cri-containerd-7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448.scope - libcontainer container 7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448. 
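Annotation on the IPAM records above: the [4004] flow for goldmane-54d579b49d-xz856 runs confirm-affinity, load-block, claim, write-back. The host's affinity for 192.168.88.128/26 is confirmed, the block is loaded, one free address (192.168.88.129) is claimed, and the block is written back ("Writing block in order to claim IPs") before the lock is released, so the claim only counts once the updated block is stored. A minimal sketch of the claim step, assuming a toy in-memory block rather than Calico's real datastore-backed one (all types here are hypothetical simplifications):

    package main

    import (
        "fmt"
        "net"
    )

    // block is a hypothetical stand-in for a Calico allocation block:
    // a /26 gives 64 slots; used[i] marks whether base+i is taken.
    type block struct {
        cidr net.IPNet
        used [64]bool
    }

    // claimNext scans for the first free slot and marks it used, mirroring
    // the "Attempting to assign 1 addresses from block" step in the log.
    func (b *block) claimNext() (net.IP, error) {
        base := b.cidr.IP.To4()
        for i := range b.used {
            if !b.used[i] {
                b.used[i] = true
                return net.IPv4(base[0], base[1], base[2], base[3]+byte(i)), nil
            }
        }
        return nil, fmt.Errorf("block %s exhausted", b.cidr.String())
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.88.128/26")
        b := &block{cidr: *cidr}
        b.used[0] = true // treat .128 as reserved in this sketch
        ip, _ := b.claimNext()
        fmt.Println("claimed", ip) // claimed 192.168.88.129
    }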
Sep 13 00:17:21.927382 systemd-networkd[1384]: caliab42ba67869: Link UP Sep 13 00:17:21.928790 systemd-networkd[1384]: caliab42ba67869: Gained carrier Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.707 [INFO][3967] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.721 [INFO][3967] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0 calico-kube-controllers-64cb648bfc- calico-system f43bb6c2-de90-41a2-b02c-8c9fc7f0992c 966 0 2025-09-13 00:16:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64cb648bfc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-64cb648bfc-dpqwh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliab42ba67869 [] [] }} ContainerID="3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" Namespace="calico-system" Pod="calico-kube-controllers-64cb648bfc-dpqwh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-" Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.721 [INFO][3967] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" Namespace="calico-system" Pod="calico-kube-controllers-64cb648bfc-dpqwh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.759 [INFO][3998] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" HandleID="k8s-pod-network.3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" Workload="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.759 [INFO][3998] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" HandleID="k8s-pod-network.3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" Workload="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c1ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-64cb648bfc-dpqwh", "timestamp":"2025-09-13 00:17:21.759536738 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.760 [INFO][3998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.802 [INFO][3998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.803 [INFO][3998] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.868 [INFO][3998] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" host="localhost" Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.875 [INFO][3998] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.883 [INFO][3998] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.886 [INFO][3998] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.890 [INFO][3998] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.891 [INFO][3998] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" host="localhost" Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.893 [INFO][3998] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.901 [INFO][3998] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" host="localhost" Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.919 [INFO][3998] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" host="localhost" Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.919 [INFO][3998] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" host="localhost" Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.919 [INFO][3998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
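Note the interleaving with the goldmane flow: request [3998] logged "About to acquire host-wide IPAM lock" at 21.760 but "Acquired" only at 21.802, the same instant [4004] logged "Released". The lock serializes CNI ADDs on the node so concurrent pods cannot claim the same address from the block. The real plugin's lock holds across plugin processes; the sketch below shows only the in-process shape of the pattern with a mutex (an assumption-laden stand-in, not the actual mechanism):

    package main

    import (
        "fmt"
        "sync"
    )

    // hostIPAMLock plays the role of the host-wide IPAM lock in the log:
    // one assignment at a time per node.
    var hostIPAMLock sync.Mutex
    var next = 129 // next free host index in 192.168.88.128/26, for the sketch

    func assignOne() string {
        hostIPAMLock.Lock()         // "Acquired host-wide IPAM lock."
        defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."
        ip := fmt.Sprintf("192.168.88.%d/26", next)
        next++
        return ip
    }

    func main() {
        pods := []string{
            "goldmane-54d579b49d-xz856",
            "calico-kube-controllers-64cb648bfc-dpqwh",
            "whisker-6bc8bd7fbb-r6htj",
        }
        var wg sync.WaitGroup
        for _, pod := range pods {
            wg.Add(1)
            go func(p string) {
                defer wg.Done()
                fmt.Println(p, "->", assignOne())
            }(pod)
        }
        wg.Wait()
    }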
Sep 13 00:17:21.949789 containerd[1450]: 2025-09-13 00:17:21.919 [INFO][3998] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" HandleID="k8s-pod-network.3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" Workload="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" Sep 13 00:17:21.950535 containerd[1450]: 2025-09-13 00:17:21.923 [INFO][3967] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" Namespace="calico-system" Pod="calico-kube-controllers-64cb648bfc-dpqwh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0", GenerateName:"calico-kube-controllers-64cb648bfc-", Namespace:"calico-system", SelfLink:"", UID:"f43bb6c2-de90-41a2-b02c-8c9fc7f0992c", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64cb648bfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-64cb648bfc-dpqwh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab42ba67869", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:21.950535 containerd[1450]: 2025-09-13 00:17:21.924 [INFO][3967] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" Namespace="calico-system" Pod="calico-kube-controllers-64cb648bfc-dpqwh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" Sep 13 00:17:21.950535 containerd[1450]: 2025-09-13 00:17:21.924 [INFO][3967] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab42ba67869 ContainerID="3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" Namespace="calico-system" Pod="calico-kube-controllers-64cb648bfc-dpqwh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" Sep 13 00:17:21.950535 containerd[1450]: 2025-09-13 00:17:21.929 [INFO][3967] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" Namespace="calico-system" Pod="calico-kube-controllers-64cb648bfc-dpqwh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" Sep 13 00:17:21.950535 containerd[1450]: 2025-09-13 00:17:21.930 [INFO][3967] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" Namespace="calico-system" Pod="calico-kube-controllers-64cb648bfc-dpqwh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0", GenerateName:"calico-kube-controllers-64cb648bfc-", Namespace:"calico-system", SelfLink:"", UID:"f43bb6c2-de90-41a2-b02c-8c9fc7f0992c", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64cb648bfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe", Pod:"calico-kube-controllers-64cb648bfc-dpqwh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab42ba67869", MAC:"ba:17:f2:34:99:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:21.950535 containerd[1450]: 2025-09-13 00:17:21.946 [INFO][3967] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe" Namespace="calico-system" Pod="calico-kube-controllers-64cb648bfc-dpqwh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" Sep 13 00:17:21.958689 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:22.006441 containerd[1450]: time="2025-09-13T00:17:22.006382195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-xz856,Uid:7935845b-1239-4bbe-bb1a-5deeeba20ac8,Namespace:calico-system,Attempt:1,} returns sandbox id \"7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448\"" Sep 13 00:17:22.010305 containerd[1450]: time="2025-09-13T00:17:22.010243418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:17:22.019475 containerd[1450]: time="2025-09-13T00:17:22.019158243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:22.019475 containerd[1450]: time="2025-09-13T00:17:22.019225227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:22.019475 containerd[1450]: time="2025-09-13T00:17:22.019239814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:22.019475 containerd[1450]: time="2025-09-13T00:17:22.019366448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:22.042950 systemd[1]: Started cri-containerd-3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe.scope - libcontainer container 3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe. Sep 13 00:17:22.059458 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:22.088097 containerd[1450]: time="2025-09-13T00:17:22.087951622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64cb648bfc-dpqwh,Uid:f43bb6c2-de90-41a2-b02c-8c9fc7f0992c,Namespace:calico-system,Attempt:1,} returns sandbox id \"3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe\"" Sep 13 00:17:22.292227 systemd-networkd[1384]: cali23efb374c46: Link UP Sep 13 00:17:22.294009 systemd-networkd[1384]: cali23efb374c46: Gained carrier Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:21.922 [INFO][4040] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:21.948 [INFO][4040] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6bc8bd7fbb--r6htj-eth0 whisker-6bc8bd7fbb- calico-system 8ed6bdef-b7b8-4a38-9a8e-375d7ba7c712 962 0 2025-09-13 00:17:21 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6bc8bd7fbb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6bc8bd7fbb-r6htj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali23efb374c46 [] [] }} ContainerID="55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" Namespace="calico-system" Pod="whisker-6bc8bd7fbb-r6htj" WorkloadEndpoint="localhost-k8s-whisker--6bc8bd7fbb--r6htj-" Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:21.948 [INFO][4040] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" Namespace="calico-system" Pod="whisker-6bc8bd7fbb-r6htj" WorkloadEndpoint="localhost-k8s-whisker--6bc8bd7fbb--r6htj-eth0" Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:21.994 [INFO][4076] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" HandleID="k8s-pod-network.55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" Workload="localhost-k8s-whisker--6bc8bd7fbb--r6htj-eth0" Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:21.995 [INFO][4076] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" HandleID="k8s-pod-network.55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" Workload="localhost-k8s-whisker--6bc8bd7fbb--r6htj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fbd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6bc8bd7fbb-r6htj", "timestamp":"2025-09-13 00:17:21.99468696 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:21.995 [INFO][4076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:21.995 [INFO][4076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:21.995 [INFO][4076] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:22.006 [INFO][4076] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" host="localhost" Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:22.242 [INFO][4076] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:22.253 [INFO][4076] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:22.256 [INFO][4076] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:22.258 [INFO][4076] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:22.259 [INFO][4076] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" host="localhost" Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:22.261 [INFO][4076] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6 Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:22.266 [INFO][4076] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" host="localhost" Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:22.274 [INFO][4076] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" host="localhost" Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:22.275 [INFO][4076] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" host="localhost" Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:22.275 [INFO][4076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
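Each "Auto assigning IP" record above dumps the same ipam.AutoAssignArgs shape: one IPv4, zero IPv6, a HandleID of "k8s-pod-network." plus the container ID, and Attrs naming namespace, node, pod (plus a timestamp, omitted below). A trimmed sketch of building such a request; the struct is a cut-down stand-in for the libcalico-go type visible in the log, not the real import:

    package main

    import "fmt"

    // AutoAssignArgs mirrors only the fields visible in the records above.
    type AutoAssignArgs struct {
        Num4, Num6  int
        HandleID    *string
        Attrs       map[string]string
        Hostname    string
        IntendedUse string
    }

    func newPodRequest(node, namespace, pod, containerID string) AutoAssignArgs {
        handle := "k8s-pod-network." + containerID
        return AutoAssignArgs{
            Num4:        1,
            Num6:        0,
            HandleID:    &handle,
            Attrs:       map[string]string{"namespace": namespace, "node": node, "pod": pod},
            Hostname:    node,
            IntendedUse: "Workload",
        }
    }

    func main() {
        args := newPodRequest("localhost", "calico-system", "whisker-6bc8bd7fbb-r6htj",
            "55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6")
        fmt.Println(*args.HandleID)
        fmt.Println(args.Attrs)
    }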
Sep 13 00:17:22.314910 containerd[1450]: 2025-09-13 00:17:22.275 [INFO][4076] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" HandleID="k8s-pod-network.55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" Workload="localhost-k8s-whisker--6bc8bd7fbb--r6htj-eth0" Sep 13 00:17:22.315582 containerd[1450]: 2025-09-13 00:17:22.281 [INFO][4040] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" Namespace="calico-system" Pod="whisker-6bc8bd7fbb-r6htj" WorkloadEndpoint="localhost-k8s-whisker--6bc8bd7fbb--r6htj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6bc8bd7fbb--r6htj-eth0", GenerateName:"whisker-6bc8bd7fbb-", Namespace:"calico-system", SelfLink:"", UID:"8ed6bdef-b7b8-4a38-9a8e-375d7ba7c712", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bc8bd7fbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6bc8bd7fbb-r6htj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali23efb374c46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:22.315582 containerd[1450]: 2025-09-13 00:17:22.281 [INFO][4040] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" Namespace="calico-system" Pod="whisker-6bc8bd7fbb-r6htj" WorkloadEndpoint="localhost-k8s-whisker--6bc8bd7fbb--r6htj-eth0" Sep 13 00:17:22.315582 containerd[1450]: 2025-09-13 00:17:22.281 [INFO][4040] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23efb374c46 ContainerID="55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" Namespace="calico-system" Pod="whisker-6bc8bd7fbb-r6htj" WorkloadEndpoint="localhost-k8s-whisker--6bc8bd7fbb--r6htj-eth0" Sep 13 00:17:22.315582 containerd[1450]: 2025-09-13 00:17:22.293 [INFO][4040] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" Namespace="calico-system" Pod="whisker-6bc8bd7fbb-r6htj" WorkloadEndpoint="localhost-k8s-whisker--6bc8bd7fbb--r6htj-eth0" Sep 13 00:17:22.315582 containerd[1450]: 2025-09-13 00:17:22.294 [INFO][4040] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" Namespace="calico-system" Pod="whisker-6bc8bd7fbb-r6htj" WorkloadEndpoint="localhost-k8s-whisker--6bc8bd7fbb--r6htj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6bc8bd7fbb--r6htj-eth0", GenerateName:"whisker-6bc8bd7fbb-", Namespace:"calico-system", SelfLink:"", UID:"8ed6bdef-b7b8-4a38-9a8e-375d7ba7c712", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bc8bd7fbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6", Pod:"whisker-6bc8bd7fbb-r6htj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali23efb374c46", MAC:"2e:2a:4a:23:f7:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:22.315582 containerd[1450]: 2025-09-13 00:17:22.309 [INFO][4040] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6" Namespace="calico-system" Pod="whisker-6bc8bd7fbb-r6htj" WorkloadEndpoint="localhost-k8s-whisker--6bc8bd7fbb--r6htj-eth0" Sep 13 00:17:22.418291 containerd[1450]: time="2025-09-13T00:17:22.417704136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:22.418291 containerd[1450]: time="2025-09-13T00:17:22.417821583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:22.418291 containerd[1450]: time="2025-09-13T00:17:22.417850917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:22.418291 containerd[1450]: time="2025-09-13T00:17:22.418016785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:22.446239 systemd[1]: Started cri-containerd-55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6.scope - libcontainer container 55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6. 
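The whisker flow above chose the host-side veth name cali23efb374c46 before starting the sandbox; the earlier flows chose calibdc79c99898 and caliab42ba67869. Host-side names must be stable per endpoint, collision-resistant, and fit the kernel's 15-byte IFNAMSIZ limit. One plausible derivation, a sketch only (I have not verified Calico's exact hash or input string), is "cali" plus the leading hex of a hash over the endpoint key:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethName derives a stable 15-char interface name from an endpoint key:
    // "cali" + first 11 hex chars of a SHA-1. This matches the shape (not
    // necessarily the algorithm) of the names in the log.
    func vethName(endpointKey string) string {
        sum := sha1.Sum([]byte(endpointKey))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        name := vethName("localhost/k8s/calico-system/whisker-6bc8bd7fbb-r6htj/eth0")
        fmt.Println(name, "len:", len(name)) // always 15 chars, under IFNAMSIZ
    }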
Sep 13 00:17:22.482667 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:22.516124 containerd[1450]: time="2025-09-13T00:17:22.515966286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bc8bd7fbb-r6htj,Uid:8ed6bdef-b7b8-4a38-9a8e-375d7ba7c712,Namespace:calico-system,Attempt:0,} returns sandbox id \"55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6\"" Sep 13 00:17:22.522190 containerd[1450]: time="2025-09-13T00:17:22.521783672Z" level=info msg="StopPodSandbox for \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\"" Sep 13 00:17:22.522190 containerd[1450]: time="2025-09-13T00:17:22.521976509Z" level=info msg="StopPodSandbox for \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\"" Sep 13 00:17:22.522827 kernel: bpftool[4310]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:17:22.523311 kubelet[2541]: I0913 00:17:22.523281 2541 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da51d102-bea9-4b73-b920-d878067a1d60" path="/var/lib/kubelet/pods/da51d102-bea9-4b73-b920-d878067a1d60/volumes" Sep 13 00:17:22.654998 containerd[1450]: 2025-09-13 00:17:22.593 [INFO][4334] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Sep 13 00:17:22.654998 containerd[1450]: 2025-09-13 00:17:22.593 [INFO][4334] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" iface="eth0" netns="/var/run/netns/cni-73626b15-e597-5e7e-5479-34df46b3da15" Sep 13 00:17:22.654998 containerd[1450]: 2025-09-13 00:17:22.593 [INFO][4334] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" iface="eth0" netns="/var/run/netns/cni-73626b15-e597-5e7e-5479-34df46b3da15" Sep 13 00:17:22.654998 containerd[1450]: 2025-09-13 00:17:22.594 [INFO][4334] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" iface="eth0" netns="/var/run/netns/cni-73626b15-e597-5e7e-5479-34df46b3da15" Sep 13 00:17:22.654998 containerd[1450]: 2025-09-13 00:17:22.594 [INFO][4334] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Sep 13 00:17:22.654998 containerd[1450]: 2025-09-13 00:17:22.594 [INFO][4334] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Sep 13 00:17:22.654998 containerd[1450]: 2025-09-13 00:17:22.634 [INFO][4349] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" HandleID="k8s-pod-network.fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Workload="localhost-k8s-csi--node--driver--k54nv-eth0" Sep 13 00:17:22.654998 containerd[1450]: 2025-09-13 00:17:22.635 [INFO][4349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:22.654998 containerd[1450]: 2025-09-13 00:17:22.635 [INFO][4349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:22.654998 containerd[1450]: 2025-09-13 00:17:22.640 [WARNING][4349] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" HandleID="k8s-pod-network.fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Workload="localhost-k8s-csi--node--driver--k54nv-eth0" Sep 13 00:17:22.654998 containerd[1450]: 2025-09-13 00:17:22.640 [INFO][4349] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" HandleID="k8s-pod-network.fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Workload="localhost-k8s-csi--node--driver--k54nv-eth0" Sep 13 00:17:22.654998 containerd[1450]: 2025-09-13 00:17:22.643 [INFO][4349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:22.654998 containerd[1450]: 2025-09-13 00:17:22.646 [INFO][4334] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Sep 13 00:17:22.655779 containerd[1450]: time="2025-09-13T00:17:22.655336287Z" level=info msg="TearDown network for sandbox \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\" successfully" Sep 13 00:17:22.655779 containerd[1450]: time="2025-09-13T00:17:22.655374177Z" level=info msg="StopPodSandbox for \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\" returns successfully" Sep 13 00:17:22.657266 containerd[1450]: time="2025-09-13T00:17:22.657167910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k54nv,Uid:5a380725-fcdb-4d95-86f1-35ff10380123,Namespace:calico-system,Attempt:1,}" Sep 13 00:17:22.660205 systemd[1]: run-netns-cni\x2d73626b15\x2de597\x2d5e7e\x2d5479\x2d34df46b3da15.mount: Deactivated successfully. Sep 13 00:17:22.661773 containerd[1450]: 2025-09-13 00:17:22.604 [INFO][4333] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" Sep 13 00:17:22.661773 containerd[1450]: 2025-09-13 00:17:22.605 [INFO][4333] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" iface="eth0" netns="/var/run/netns/cni-ec3564c7-2bdb-dee6-f4e6-74bde3394eca" Sep 13 00:17:22.661773 containerd[1450]: 2025-09-13 00:17:22.606 [INFO][4333] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" iface="eth0" netns="/var/run/netns/cni-ec3564c7-2bdb-dee6-f4e6-74bde3394eca" Sep 13 00:17:22.661773 containerd[1450]: 2025-09-13 00:17:22.607 [INFO][4333] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" iface="eth0" netns="/var/run/netns/cni-ec3564c7-2bdb-dee6-f4e6-74bde3394eca" Sep 13 00:17:22.661773 containerd[1450]: 2025-09-13 00:17:22.607 [INFO][4333] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" Sep 13 00:17:22.661773 containerd[1450]: 2025-09-13 00:17:22.607 [INFO][4333] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" Sep 13 00:17:22.661773 containerd[1450]: 2025-09-13 00:17:22.644 [INFO][4355] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" HandleID="k8s-pod-network.ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0" Sep 13 00:17:22.661773 containerd[1450]: 2025-09-13 00:17:22.645 [INFO][4355] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:22.661773 containerd[1450]: 2025-09-13 00:17:22.645 [INFO][4355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:22.661773 containerd[1450]: 2025-09-13 00:17:22.651 [WARNING][4355] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" HandleID="k8s-pod-network.ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0" Sep 13 00:17:22.661773 containerd[1450]: 2025-09-13 00:17:22.651 [INFO][4355] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" HandleID="k8s-pod-network.ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0" Sep 13 00:17:22.661773 containerd[1450]: 2025-09-13 00:17:22.653 [INFO][4355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:22.661773 containerd[1450]: 2025-09-13 00:17:22.657 [INFO][4333] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" Sep 13 00:17:22.662422 containerd[1450]: time="2025-09-13T00:17:22.662019758Z" level=info msg="TearDown network for sandbox \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\" successfully" Sep 13 00:17:22.662422 containerd[1450]: time="2025-09-13T00:17:22.662048431Z" level=info msg="StopPodSandbox for \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\" returns successfully" Sep 13 00:17:22.664565 systemd[1]: run-netns-cni\x2dec3564c7\x2d2bdb\x2ddee6\x2df4e6\x2d74bde3394eca.mount: Deactivated successfully. 
Sep 13 00:17:22.665679 containerd[1450]: time="2025-09-13T00:17:22.665620889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4b4787d4-c79sh,Uid:ea66517c-8704-46d2-93ab-a6f19d47926d,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:17:22.935846 systemd-networkd[1384]: vxlan.calico: Link UP Sep 13 00:17:22.935859 systemd-networkd[1384]: vxlan.calico: Gained carrier Sep 13 00:17:22.960037 systemd-networkd[1384]: cali16d3fc79384: Link UP Sep 13 00:17:22.964502 systemd-networkd[1384]: cali16d3fc79384: Gained carrier Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.830 [INFO][4366] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--k54nv-eth0 csi-node-driver- calico-system 5a380725-fcdb-4d95-86f1-35ff10380123 985 0 2025-09-13 00:16:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-k54nv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali16d3fc79384 [] [] }} ContainerID="a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" Namespace="calico-system" Pod="csi-node-driver-k54nv" WorkloadEndpoint="localhost-k8s-csi--node--driver--k54nv-" Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.831 [INFO][4366] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" Namespace="calico-system" Pod="csi-node-driver-k54nv" WorkloadEndpoint="localhost-k8s-csi--node--driver--k54nv-eth0" Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.881 [INFO][4405] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" HandleID="k8s-pod-network.a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" Workload="localhost-k8s-csi--node--driver--k54nv-eth0" Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.881 [INFO][4405] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" HandleID="k8s-pod-network.a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" Workload="localhost-k8s-csi--node--driver--k54nv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5f40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-k54nv", "timestamp":"2025-09-13 00:17:22.881305914 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.881 [INFO][4405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.881 [INFO][4405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.881 [INFO][4405] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.892 [INFO][4405] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" host="localhost" Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.905 [INFO][4405] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.915 [INFO][4405] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.919 [INFO][4405] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.925 [INFO][4405] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.925 [INFO][4405] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" host="localhost" Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.928 [INFO][4405] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2 Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.935 [INFO][4405] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" host="localhost" Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.948 [INFO][4405] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" host="localhost" Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.949 [INFO][4405] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" host="localhost" Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.949 [INFO][4405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
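The Workload= and HandleID= strings in the csi-node-driver records flatten node, orchestrator, pod, and interface into one name, doubling any literal dash inside a component ("csi-node-driver-k54nv" becomes "csi--node--driver--k54nv") so the single-dash separators stay unambiguous. A sketch of that escaping as read from the names in this log (not a verified Calico function):

    package main

    import (
        "fmt"
        "strings"
    )

    // wepName flattens (node, pod, iface) the way the Workload= values above
    // read: dashes inside each component are doubled, components are joined
    // with single dashes.
    func wepName(node, pod, iface string) string {
        esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
        return strings.Join([]string{esc(node), "k8s", esc(pod), esc(iface)}, "-")
    }

    func main() {
        fmt.Println(wepName("localhost", "csi-node-driver-k54nv", "eth0"))
        // localhost-k8s-csi--node--driver--k54nv-eth0
    }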
Sep 13 00:17:22.996993 containerd[1450]: 2025-09-13 00:17:22.949 [INFO][4405] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" HandleID="k8s-pod-network.a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" Workload="localhost-k8s-csi--node--driver--k54nv-eth0" Sep 13 00:17:22.997946 containerd[1450]: 2025-09-13 00:17:22.953 [INFO][4366] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" Namespace="calico-system" Pod="csi-node-driver-k54nv" WorkloadEndpoint="localhost-k8s-csi--node--driver--k54nv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k54nv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5a380725-fcdb-4d95-86f1-35ff10380123", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-k54nv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16d3fc79384", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:22.997946 containerd[1450]: 2025-09-13 00:17:22.953 [INFO][4366] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" Namespace="calico-system" Pod="csi-node-driver-k54nv" WorkloadEndpoint="localhost-k8s-csi--node--driver--k54nv-eth0" Sep 13 00:17:22.997946 containerd[1450]: 2025-09-13 00:17:22.953 [INFO][4366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16d3fc79384 ContainerID="a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" Namespace="calico-system" Pod="csi-node-driver-k54nv" WorkloadEndpoint="localhost-k8s-csi--node--driver--k54nv-eth0" Sep 13 00:17:22.997946 containerd[1450]: 2025-09-13 00:17:22.964 [INFO][4366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" Namespace="calico-system" Pod="csi-node-driver-k54nv" WorkloadEndpoint="localhost-k8s-csi--node--driver--k54nv-eth0" Sep 13 00:17:22.997946 containerd[1450]: 2025-09-13 00:17:22.969 [INFO][4366] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" Namespace="calico-system" Pod="csi-node-driver-k54nv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--k54nv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k54nv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5a380725-fcdb-4d95-86f1-35ff10380123", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2", Pod:"csi-node-driver-k54nv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16d3fc79384", MAC:"be:ad:6a:b0:eb:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:22.997946 containerd[1450]: 2025-09-13 00:17:22.985 [INFO][4366] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2" Namespace="calico-system" Pod="csi-node-driver-k54nv" WorkloadEndpoint="localhost-k8s-csi--node--driver--k54nv-eth0" Sep 13 00:17:23.062267 containerd[1450]: time="2025-09-13T00:17:23.062144820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:23.062267 containerd[1450]: time="2025-09-13T00:17:23.062211203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:23.062267 containerd[1450]: time="2025-09-13T00:17:23.062225299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:23.062504 containerd[1450]: time="2025-09-13T00:17:23.062336635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:23.088103 systemd[1]: Started cri-containerd-a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2.scope - libcontainer container a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2. Sep 13 00:17:23.114310 systemd-networkd[1384]: calic333d19b707: Link UP Sep 13 00:17:23.115839 systemd-networkd[1384]: calic333d19b707: Gained carrier Sep 13 00:17:23.117174 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:23.130538 systemd[1]: Started sshd@8-10.0.0.139:22-10.0.0.1:35342.service - OpenSSH per-connection server daemon (10.0.0.1:35342). 
Sep 13 00:17:23.162177 containerd[1450]: time="2025-09-13T00:17:23.162070785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k54nv,Uid:5a380725-fcdb-4d95-86f1-35ff10380123,Namespace:calico-system,Attempt:1,} returns sandbox id \"a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2\"" Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:22.864 [INFO][4381] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0 calico-apiserver-5b4b4787d4- calico-apiserver ea66517c-8704-46d2-93ab-a6f19d47926d 986 0 2025-09-13 00:16:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b4b4787d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b4b4787d4-c79sh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic333d19b707 [] [] }} ContainerID="37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" Namespace="calico-apiserver" Pod="calico-apiserver-5b4b4787d4-c79sh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-" Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:22.864 [INFO][4381] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" Namespace="calico-apiserver" Pod="calico-apiserver-5b4b4787d4-c79sh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0" Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:22.914 [INFO][4415] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" HandleID="k8s-pod-network.37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0" Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:22.914 [INFO][4415] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" HandleID="k8s-pod-network.37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f770), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b4b4787d4-c79sh", "timestamp":"2025-09-13 00:17:22.914023293 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:22.914 [INFO][4415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:22.949 [INFO][4415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:22.949 [INFO][4415] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:22.992 [INFO][4415] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" host="localhost" Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:23.004 [INFO][4415] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:23.014 [INFO][4415] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:23.017 [INFO][4415] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:23.020 [INFO][4415] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:23.020 [INFO][4415] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" host="localhost" Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:23.023 [INFO][4415] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:23.036 [INFO][4415] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" host="localhost" Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:23.088 [INFO][4415] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" host="localhost" Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:23.089 [INFO][4415] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" host="localhost" Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:23.090 [INFO][4415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:17:23.191138 containerd[1450]: 2025-09-13 00:17:23.091 [INFO][4415] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" HandleID="k8s-pod-network.37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0" Sep 13 00:17:23.191824 containerd[1450]: 2025-09-13 00:17:23.107 [INFO][4381] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" Namespace="calico-apiserver" Pod="calico-apiserver-5b4b4787d4-c79sh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0", GenerateName:"calico-apiserver-5b4b4787d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea66517c-8704-46d2-93ab-a6f19d47926d", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4b4787d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b4b4787d4-c79sh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic333d19b707", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:23.191824 containerd[1450]: 2025-09-13 00:17:23.107 [INFO][4381] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" Namespace="calico-apiserver" Pod="calico-apiserver-5b4b4787d4-c79sh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0" Sep 13 00:17:23.191824 containerd[1450]: 2025-09-13 00:17:23.107 [INFO][4381] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic333d19b707 ContainerID="37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" Namespace="calico-apiserver" Pod="calico-apiserver-5b4b4787d4-c79sh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0" Sep 13 00:17:23.191824 containerd[1450]: 2025-09-13 00:17:23.115 [INFO][4381] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" Namespace="calico-apiserver" Pod="calico-apiserver-5b4b4787d4-c79sh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0" Sep 13 00:17:23.191824 containerd[1450]: 2025-09-13 00:17:23.136 [INFO][4381] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" Namespace="calico-apiserver" Pod="calico-apiserver-5b4b4787d4-c79sh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0", GenerateName:"calico-apiserver-5b4b4787d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea66517c-8704-46d2-93ab-a6f19d47926d", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4b4787d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b", Pod:"calico-apiserver-5b4b4787d4-c79sh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic333d19b707", MAC:"2e:c9:be:66:b1:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:23.191824 containerd[1450]: 2025-09-13 00:17:23.168 [INFO][4381] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b" Namespace="calico-apiserver" Pod="calico-apiserver-5b4b4787d4-c79sh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0" Sep 13 00:17:23.199379 sshd[4490]: Accepted publickey for core from 10.0.0.1 port 35342 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:17:23.202461 sshd[4490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:23.214078 systemd-logind[1432]: New session 9 of user core. Sep 13 00:17:23.220609 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:17:23.231273 containerd[1450]: time="2025-09-13T00:17:23.230059227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:23.231273 containerd[1450]: time="2025-09-13T00:17:23.230287751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:23.231273 containerd[1450]: time="2025-09-13T00:17:23.230373059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:23.231273 containerd[1450]: time="2025-09-13T00:17:23.230862087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:23.257218 systemd[1]: Started cri-containerd-37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b.scope - libcontainer container 37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b. Sep 13 00:17:23.278913 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:23.320865 containerd[1450]: time="2025-09-13T00:17:23.320530589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4b4787d4-c79sh,Uid:ea66517c-8704-46d2-93ab-a6f19d47926d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b\"" Sep 13 00:17:23.355007 systemd-networkd[1384]: calibdc79c99898: Gained IPv6LL Sep 13 00:17:23.419173 sshd[4490]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:23.429612 systemd[1]: sshd@8-10.0.0.139:22-10.0.0.1:35342.service: Deactivated successfully. Sep 13 00:17:23.430060 systemd-logind[1432]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:17:23.433731 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:17:23.435736 systemd-logind[1432]: Removed session 9. Sep 13 00:17:23.483500 systemd-networkd[1384]: caliab42ba67869: Gained IPv6LL Sep 13 00:17:23.930010 systemd-networkd[1384]: cali23efb374c46: Gained IPv6LL Sep 13 00:17:24.521113 containerd[1450]: time="2025-09-13T00:17:24.520668747Z" level=info msg="StopPodSandbox for \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\"" Sep 13 00:17:24.521113 containerd[1450]: time="2025-09-13T00:17:24.520721705Z" level=info msg="StopPodSandbox for \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\"" Sep 13 00:17:24.521730 containerd[1450]: time="2025-09-13T00:17:24.521371272Z" level=info msg="StopPodSandbox for \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\"" Sep 13 00:17:24.698030 systemd-networkd[1384]: cali16d3fc79384: Gained IPv6LL Sep 13 00:17:24.788188 containerd[1450]: 2025-09-13 00:17:24.724 [INFO][4643] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Sep 13 00:17:24.788188 containerd[1450]: 2025-09-13 00:17:24.724 [INFO][4643] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" iface="eth0" netns="/var/run/netns/cni-3abf9964-ab36-5341-60c0-6bfb16bc8d4a" Sep 13 00:17:24.788188 containerd[1450]: 2025-09-13 00:17:24.724 [INFO][4643] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" iface="eth0" netns="/var/run/netns/cni-3abf9964-ab36-5341-60c0-6bfb16bc8d4a" Sep 13 00:17:24.788188 containerd[1450]: 2025-09-13 00:17:24.724 [INFO][4643] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" iface="eth0" netns="/var/run/netns/cni-3abf9964-ab36-5341-60c0-6bfb16bc8d4a" Sep 13 00:17:24.788188 containerd[1450]: 2025-09-13 00:17:24.724 [INFO][4643] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Sep 13 00:17:24.788188 containerd[1450]: 2025-09-13 00:17:24.724 [INFO][4643] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Sep 13 00:17:24.788188 containerd[1450]: 2025-09-13 00:17:24.768 [INFO][4661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" HandleID="k8s-pod-network.b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" Sep 13 00:17:24.788188 containerd[1450]: 2025-09-13 00:17:24.769 [INFO][4661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:24.788188 containerd[1450]: 2025-09-13 00:17:24.769 [INFO][4661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:24.788188 containerd[1450]: 2025-09-13 00:17:24.778 [WARNING][4661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" HandleID="k8s-pod-network.b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" Sep 13 00:17:24.788188 containerd[1450]: 2025-09-13 00:17:24.778 [INFO][4661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" HandleID="k8s-pod-network.b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" Sep 13 00:17:24.788188 containerd[1450]: 2025-09-13 00:17:24.780 [INFO][4661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:24.788188 containerd[1450]: 2025-09-13 00:17:24.784 [INFO][4643] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Sep 13 00:17:24.791066 containerd[1450]: time="2025-09-13T00:17:24.791014902Z" level=info msg="TearDown network for sandbox \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\" successfully" Sep 13 00:17:24.791066 containerd[1450]: time="2025-09-13T00:17:24.791064363Z" level=info msg="StopPodSandbox for \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\" returns successfully" Sep 13 00:17:24.793944 containerd[1450]: time="2025-09-13T00:17:24.793603093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4b4787d4-9z5h7,Uid:052be84e-22d7-4ee1-a92b-8fcfaea0b69e,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:17:24.792418 systemd[1]: run-netns-cni\x2d3abf9964\x2dab36\x2d5341\x2d60c0\x2d6bfb16bc8d4a.mount: Deactivated successfully. Sep 13 00:17:24.860135 containerd[1450]: 2025-09-13 00:17:24.736 [INFO][4631] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" Sep 13 00:17:24.860135 containerd[1450]: 2025-09-13 00:17:24.737 [INFO][4631] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" iface="eth0" netns="/var/run/netns/cni-fb72b1df-eb60-5266-35e9-04cbef9519d1" Sep 13 00:17:24.860135 containerd[1450]: 2025-09-13 00:17:24.737 [INFO][4631] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" iface="eth0" netns="/var/run/netns/cni-fb72b1df-eb60-5266-35e9-04cbef9519d1" Sep 13 00:17:24.860135 containerd[1450]: 2025-09-13 00:17:24.737 [INFO][4631] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" iface="eth0" netns="/var/run/netns/cni-fb72b1df-eb60-5266-35e9-04cbef9519d1" Sep 13 00:17:24.860135 containerd[1450]: 2025-09-13 00:17:24.737 [INFO][4631] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" Sep 13 00:17:24.860135 containerd[1450]: 2025-09-13 00:17:24.737 [INFO][4631] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" Sep 13 00:17:24.860135 containerd[1450]: 2025-09-13 00:17:24.842 [INFO][4667] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" HandleID="k8s-pod-network.fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" Workload="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0" Sep 13 00:17:24.860135 containerd[1450]: 2025-09-13 00:17:24.843 [INFO][4667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:24.860135 containerd[1450]: 2025-09-13 00:17:24.843 [INFO][4667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:24.860135 containerd[1450]: 2025-09-13 00:17:24.850 [WARNING][4667] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" HandleID="k8s-pod-network.fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" Workload="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0" Sep 13 00:17:24.860135 containerd[1450]: 2025-09-13 00:17:24.850 [INFO][4667] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" HandleID="k8s-pod-network.fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" Workload="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0" Sep 13 00:17:24.860135 containerd[1450]: 2025-09-13 00:17:24.853 [INFO][4667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:24.860135 containerd[1450]: 2025-09-13 00:17:24.856 [INFO][4631] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" Sep 13 00:17:24.866969 systemd[1]: run-netns-cni\x2dfb72b1df\x2deb60\x2d5266\x2d35e9\x2d04cbef9519d1.mount: Deactivated successfully. 
Sep 13 00:17:24.867965 containerd[1450]: time="2025-09-13T00:17:24.867919803Z" level=info msg="TearDown network for sandbox \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\" successfully" Sep 13 00:17:24.867965 containerd[1450]: time="2025-09-13T00:17:24.867962142Z" level=info msg="StopPodSandbox for \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\" returns successfully" Sep 13 00:17:24.868593 kubelet[2541]: E0913 00:17:24.868411 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:24.873783 containerd[1450]: 2025-09-13 00:17:24.734 [INFO][4632] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Sep 13 00:17:24.873783 containerd[1450]: 2025-09-13 00:17:24.734 [INFO][4632] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" iface="eth0" netns="/var/run/netns/cni-ae4b36d4-790c-f7cb-26e4-16c4d6827d17" Sep 13 00:17:24.873783 containerd[1450]: 2025-09-13 00:17:24.735 [INFO][4632] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" iface="eth0" netns="/var/run/netns/cni-ae4b36d4-790c-f7cb-26e4-16c4d6827d17" Sep 13 00:17:24.873783 containerd[1450]: 2025-09-13 00:17:24.744 [INFO][4632] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" iface="eth0" netns="/var/run/netns/cni-ae4b36d4-790c-f7cb-26e4-16c4d6827d17" Sep 13 00:17:24.873783 containerd[1450]: 2025-09-13 00:17:24.744 [INFO][4632] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Sep 13 00:17:24.873783 containerd[1450]: 2025-09-13 00:17:24.745 [INFO][4632] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Sep 13 00:17:24.873783 containerd[1450]: 2025-09-13 00:17:24.848 [INFO][4671] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" HandleID="k8s-pod-network.4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Workload="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0" Sep 13 00:17:24.873783 containerd[1450]: 2025-09-13 00:17:24.848 [INFO][4671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:24.873783 containerd[1450]: 2025-09-13 00:17:24.852 [INFO][4671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:24.873783 containerd[1450]: 2025-09-13 00:17:24.858 [WARNING][4671] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" HandleID="k8s-pod-network.4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Workload="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0" Sep 13 00:17:24.873783 containerd[1450]: 2025-09-13 00:17:24.858 [INFO][4671] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" HandleID="k8s-pod-network.4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Workload="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0" Sep 13 00:17:24.873783 containerd[1450]: 2025-09-13 00:17:24.861 [INFO][4671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:24.873783 containerd[1450]: 2025-09-13 00:17:24.868 [INFO][4632] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Sep 13 00:17:24.873783 containerd[1450]: time="2025-09-13T00:17:24.872283143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d7v2c,Uid:bd84b29b-9cc2-416f-b838-56415fe1dcf0,Namespace:kube-system,Attempt:1,}" Sep 13 00:17:24.873783 containerd[1450]: time="2025-09-13T00:17:24.873616452Z" level=info msg="TearDown network for sandbox \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\" successfully" Sep 13 00:17:24.873783 containerd[1450]: time="2025-09-13T00:17:24.873643752Z" level=info msg="StopPodSandbox for \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\" returns successfully" Sep 13 00:17:24.874535 containerd[1450]: time="2025-09-13T00:17:24.874414144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-22g6l,Uid:31ddf125-3a79-4bfa-9a91-12d895809873,Namespace:kube-system,Attempt:1,}" Sep 13 00:17:24.874577 kubelet[2541]: E0913 00:17:24.874081 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:24.878374 systemd[1]: run-netns-cni\x2dae4b36d4\x2d790c\x2df7cb\x2d26e4\x2d16c4d6827d17.mount: Deactivated successfully. 
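All three sandbox teardowns above follow the CNI DEL contract: the runtime invokes the plugin with CNI_COMMAND=DEL, and DEL is required to be idempotent, which is why a veth that is already gone ("Nothing to do.") and an IPAM handle that no longer exists ("Asked to release address but it doesn't exist. Ignoring") are logged as non-fatal. A minimal sketch of how a runtime drives a plugin per the CNI spec; the plugin path and the network config on stdin are placeholders, while the container ID and netns are taken from the first teardown above:

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Network configuration is delivered on stdin; this one is a placeholder.
	conf := `{"cniVersion":"0.4.0","name":"k8s-pod-network","type":"calico"}`

	// Per the CNI spec, per-invocation parameters travel as CNI_*
	// environment variables. The binary path is a hypothetical install location.
	cmd := exec.Command("/opt/cni/bin/calico")
	cmd.Stdin = strings.NewReader(conf)
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=DEL",
		"CNI_CONTAINERID=b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a",
		"CNI_NETNS=/var/run/netns/cni-3abf9964-ab36-5341-60c0-6bfb16bc8d4a",
		"CNI_IFNAME=eth0",
		"CNI_PATH=/opt/cni/bin",
	)
	// DEL must succeed even if the interface or the IPAM allocation
	// no longer exists -- exactly the behavior the log shows.
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("CNI DEL failed: %v\n%s", err, out)
	}
}
```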
Sep 13 00:17:24.890088 systemd-networkd[1384]: vxlan.calico: Gained IPv6LL Sep 13 00:17:25.019136 systemd-networkd[1384]: calic333d19b707: Gained IPv6LL Sep 13 00:17:25.077461 systemd-networkd[1384]: cali0ff2c41099f: Link UP Sep 13 00:17:25.079265 systemd-networkd[1384]: cali0ff2c41099f: Gained carrier Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:24.987 [INFO][4713] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--22g6l-eth0 coredns-668d6bf9bc- kube-system 31ddf125-3a79-4bfa-9a91-12d895809873 1010 0 2025-09-13 00:16:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-22g6l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0ff2c41099f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-22g6l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--22g6l-" Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:24.987 [INFO][4713] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-22g6l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0" Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:25.021 [INFO][4748] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" HandleID="k8s-pod-network.e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" Workload="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0" Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:25.021 [INFO][4748] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" HandleID="k8s-pod-network.e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" Workload="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001314f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-22g6l", "timestamp":"2025-09-13 00:17:25.021298445 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:25.022 [INFO][4748] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:25.022 [INFO][4748] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:25.022 [INFO][4748] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:25.029 [INFO][4748] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" host="localhost" Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:25.039 [INFO][4748] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:25.044 [INFO][4748] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:25.047 [INFO][4748] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:25.050 [INFO][4748] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:25.050 [INFO][4748] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" host="localhost" Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:25.053 [INFO][4748] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:25.057 [INFO][4748] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" host="localhost" Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:25.066 [INFO][4748] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" host="localhost" Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:25.066 [INFO][4748] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" host="localhost" Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:25.066 [INFO][4748] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
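Because the whole node shares one affine /26, the assignments in this section simply walk the block: .133 above, .134 here, then .135 and .136 below. A toy model of "assign 1 address from block", assuming .128–.132 went to the endpoints created earlier in this boot (Calico's real allocator is bitmap-based, not a linear scan):

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks a block in order and returns the first address not
// yet allocated -- a toy stand-in for Calico's block allocator.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	// Addresses assumed already handed out on this host; only .133
	// is directly attested by the log above.
	used := map[netip.Addr]bool{}
	for _, s := range []string{"192.168.88.128", "192.168.88.129",
		"192.168.88.130", "192.168.88.131", "192.168.88.132",
		"192.168.88.133"} {
		used[netip.MustParseAddr(s)] = true
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Println("next assignment:", a) // 192.168.88.134, as in the log
	}
}
```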
Sep 13 00:17:25.105430 containerd[1450]: 2025-09-13 00:17:25.066 [INFO][4748] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" HandleID="k8s-pod-network.e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" Workload="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0" Sep 13 00:17:25.106576 containerd[1450]: 2025-09-13 00:17:25.071 [INFO][4713] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-22g6l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--22g6l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"31ddf125-3a79-4bfa-9a91-12d895809873", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-22g6l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ff2c41099f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:25.106576 containerd[1450]: 2025-09-13 00:17:25.071 [INFO][4713] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-22g6l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0" Sep 13 00:17:25.106576 containerd[1450]: 2025-09-13 00:17:25.072 [INFO][4713] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ff2c41099f ContainerID="e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-22g6l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0" Sep 13 00:17:25.106576 containerd[1450]: 2025-09-13 00:17:25.081 [INFO][4713] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-22g6l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0" Sep 13 00:17:25.106576 
containerd[1450]: 2025-09-13 00:17:25.084 [INFO][4713] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-22g6l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--22g6l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"31ddf125-3a79-4bfa-9a91-12d895809873", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e", Pod:"coredns-668d6bf9bc-22g6l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ff2c41099f", MAC:"b2:17:6c:ce:5d:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:25.106576 containerd[1450]: 2025-09-13 00:17:25.101 [INFO][4713] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-22g6l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0" Sep 13 00:17:25.190435 containerd[1450]: time="2025-09-13T00:17:25.190251742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:25.192024 containerd[1450]: time="2025-09-13T00:17:25.191790386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:25.192024 containerd[1450]: time="2025-09-13T00:17:25.191903988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:25.193098 containerd[1450]: time="2025-09-13T00:17:25.193038590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:25.218195 systemd-networkd[1384]: cali7faad82068f: Link UP Sep 13 00:17:25.223262 systemd-networkd[1384]: cali7faad82068f: Gained carrier Sep 13 00:17:25.224196 systemd[1]: Started cri-containerd-e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e.scope - libcontainer container e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e. Sep 13 00:17:25.253773 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:24.981 [INFO][4701] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0 calico-apiserver-5b4b4787d4- calico-apiserver 052be84e-22d7-4ee1-a92b-8fcfaea0b69e 1008 0 2025-09-13 00:16:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b4b4787d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b4b4787d4-9z5h7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7faad82068f [] [] }} ContainerID="4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" Namespace="calico-apiserver" Pod="calico-apiserver-5b4b4787d4-9z5h7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-" Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:24.981 [INFO][4701] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" Namespace="calico-apiserver" Pod="calico-apiserver-5b4b4787d4-9z5h7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:25.025 [INFO][4738] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" HandleID="k8s-pod-network.4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:25.025 [INFO][4738] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" HandleID="k8s-pod-network.4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e4d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b4b4787d4-9z5h7", "timestamp":"2025-09-13 00:17:25.025060027 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:25.025 [INFO][4738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:25.067 [INFO][4738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:25.067 [INFO][4738] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:25.131 [INFO][4738] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" host="localhost" Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:25.140 [INFO][4738] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:25.176 [INFO][4738] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:25.182 [INFO][4738] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:25.188 [INFO][4738] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:25.188 [INFO][4738] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" host="localhost" Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:25.191 [INFO][4738] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:25.196 [INFO][4738] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" host="localhost" Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:25.204 [INFO][4738] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" host="localhost" Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:25.204 [INFO][4738] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" host="localhost" Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:25.204 [INFO][4738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
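Note the timing in this request: it announced "About to acquire host-wide IPAM lock" at 25.025 but only acquired it at 25.067, exactly when the coredns assignment above released it. The lock serializes concurrent CNI ADDs on a host so two pods cannot claim the same address from the shared block. Illustrative only, since the log does not show Calico's actual locking mechanism, an exclusive advisory file lock gives the same blocking behavior:

```go
package main

import (
	"log"
	"os"
	"syscall"
)

func main() {
	// Hypothetical lock file; not Calico's real path.
	f, err := os.OpenFile("/tmp/ipam.lock", os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Blocks until any other holder releases -- the ~42 ms gap between
	// "About to acquire" and "Acquired" in the log is this kind of wait.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		log.Fatal(err)
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)

	// ...read block, pick a free address, write block back...
}
```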
Sep 13 00:17:25.257888 containerd[1450]: 2025-09-13 00:17:25.204 [INFO][4738] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" HandleID="k8s-pod-network.4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" Sep 13 00:17:25.258648 containerd[1450]: 2025-09-13 00:17:25.211 [INFO][4701] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" Namespace="calico-apiserver" Pod="calico-apiserver-5b4b4787d4-9z5h7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0", GenerateName:"calico-apiserver-5b4b4787d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"052be84e-22d7-4ee1-a92b-8fcfaea0b69e", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4b4787d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b4b4787d4-9z5h7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7faad82068f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:25.258648 containerd[1450]: 2025-09-13 00:17:25.211 [INFO][4701] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" Namespace="calico-apiserver" Pod="calico-apiserver-5b4b4787d4-9z5h7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" Sep 13 00:17:25.258648 containerd[1450]: 2025-09-13 00:17:25.211 [INFO][4701] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7faad82068f ContainerID="4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" Namespace="calico-apiserver" Pod="calico-apiserver-5b4b4787d4-9z5h7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" Sep 13 00:17:25.258648 containerd[1450]: 2025-09-13 00:17:25.225 [INFO][4701] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" Namespace="calico-apiserver" Pod="calico-apiserver-5b4b4787d4-9z5h7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" Sep 13 00:17:25.258648 containerd[1450]: 2025-09-13 00:17:25.225 [INFO][4701] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" Namespace="calico-apiserver" Pod="calico-apiserver-5b4b4787d4-9z5h7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0", GenerateName:"calico-apiserver-5b4b4787d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"052be84e-22d7-4ee1-a92b-8fcfaea0b69e", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4b4787d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c", Pod:"calico-apiserver-5b4b4787d4-9z5h7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7faad82068f", MAC:"f6:03:46:e7:10:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:25.258648 containerd[1450]: 2025-09-13 00:17:25.252 [INFO][4701] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c" Namespace="calico-apiserver" Pod="calico-apiserver-5b4b4787d4-9z5h7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" Sep 13 00:17:25.290923 containerd[1450]: time="2025-09-13T00:17:25.290697233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-22g6l,Uid:31ddf125-3a79-4bfa-9a91-12d895809873,Namespace:kube-system,Attempt:1,} returns sandbox id \"e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e\"" Sep 13 00:17:25.294143 kubelet[2541]: E0913 00:17:25.293521 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:25.298241 containerd[1450]: time="2025-09-13T00:17:25.295457245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:25.298241 containerd[1450]: time="2025-09-13T00:17:25.295531914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:25.298241 containerd[1450]: time="2025-09-13T00:17:25.295549878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:25.298241 containerd[1450]: time="2025-09-13T00:17:25.295682194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:25.322474 systemd-networkd[1384]: cali586f238d58d: Link UP Sep 13 00:17:25.325846 systemd[1]: Started cri-containerd-4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c.scope - libcontainer container 4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c. Sep 13 00:17:25.327646 systemd-networkd[1384]: cali586f238d58d: Gained carrier Sep 13 00:17:25.339351 containerd[1450]: time="2025-09-13T00:17:25.338945154Z" level=info msg="CreateContainer within sandbox \"e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:24.973 [INFO][4690] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0 coredns-668d6bf9bc- kube-system bd84b29b-9cc2-416f-b838-56415fe1dcf0 1009 0 2025-09-13 00:16:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-d7v2c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali586f238d58d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" Namespace="kube-system" Pod="coredns-668d6bf9bc-d7v2c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d7v2c-" Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:24.973 [INFO][4690] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" Namespace="kube-system" Pod="coredns-668d6bf9bc-d7v2c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0" Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:25.037 [INFO][4736] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" HandleID="k8s-pod-network.882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" Workload="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0" Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:25.038 [INFO][4736] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" HandleID="k8s-pod-network.882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" Workload="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-d7v2c", "timestamp":"2025-09-13 00:17:25.037036734 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:25.038 [INFO][4736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:25.204 [INFO][4736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:25.204 [INFO][4736] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:25.231 [INFO][4736] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" host="localhost" Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:25.242 [INFO][4736] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:25.283 [INFO][4736] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:25.286 [INFO][4736] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:25.290 [INFO][4736] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:25.291 [INFO][4736] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" host="localhost" Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:25.293 [INFO][4736] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:25.302 [INFO][4736] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" host="localhost" Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:25.309 [INFO][4736] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" host="localhost" Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:25.309 [INFO][4736] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" host="localhost" Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:25.310 [INFO][4736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:17:25.353716 containerd[1450]: 2025-09-13 00:17:25.310 [INFO][4736] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" HandleID="k8s-pod-network.882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" Workload="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0" Sep 13 00:17:25.354632 containerd[1450]: 2025-09-13 00:17:25.318 [INFO][4690] cni-plugin/k8s.go 418: Populated endpoint ContainerID="882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" Namespace="kube-system" Pod="coredns-668d6bf9bc-d7v2c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd84b29b-9cc2-416f-b838-56415fe1dcf0", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-d7v2c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali586f238d58d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:25.354632 containerd[1450]: 2025-09-13 00:17:25.318 [INFO][4690] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" Namespace="kube-system" Pod="coredns-668d6bf9bc-d7v2c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0" Sep 13 00:17:25.354632 containerd[1450]: 2025-09-13 00:17:25.318 [INFO][4690] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali586f238d58d ContainerID="882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" Namespace="kube-system" Pod="coredns-668d6bf9bc-d7v2c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0" Sep 13 00:17:25.354632 containerd[1450]: 2025-09-13 00:17:25.323 [INFO][4690] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" Namespace="kube-system" Pod="coredns-668d6bf9bc-d7v2c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0" Sep 13 00:17:25.354632 
containerd[1450]: 2025-09-13 00:17:25.324 [INFO][4690] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" Namespace="kube-system" Pod="coredns-668d6bf9bc-d7v2c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd84b29b-9cc2-416f-b838-56415fe1dcf0", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b", Pod:"coredns-668d6bf9bc-d7v2c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali586f238d58d", MAC:"92:0e:20:de:b7:e5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:25.354632 containerd[1450]: 2025-09-13 00:17:25.339 [INFO][4690] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b" Namespace="kube-system" Pod="coredns-668d6bf9bc-d7v2c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0" Sep 13 00:17:25.368517 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:25.376568 containerd[1450]: time="2025-09-13T00:17:25.376505397Z" level=info msg="CreateContainer within sandbox \"e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3fc0bb3e59be8166112adfbef90cb32a09a901f5620952a7270e88da3a9d53da\"" Sep 13 00:17:25.377637 containerd[1450]: time="2025-09-13T00:17:25.377587100Z" level=info msg="StartContainer for \"3fc0bb3e59be8166112adfbef90cb32a09a901f5620952a7270e88da3a9d53da\"" Sep 13 00:17:25.406212 containerd[1450]: time="2025-09-13T00:17:25.405474275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:25.406212 containerd[1450]: time="2025-09-13T00:17:25.405548904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:25.406212 containerd[1450]: time="2025-09-13T00:17:25.405568661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:25.406212 containerd[1450]: time="2025-09-13T00:17:25.405676381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:25.434854 containerd[1450]: time="2025-09-13T00:17:25.434437134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4b4787d4-9z5h7,Uid:052be84e-22d7-4ee1-a92b-8fcfaea0b69e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c\"" Sep 13 00:17:25.441679 systemd[1]: Started cri-containerd-3fc0bb3e59be8166112adfbef90cb32a09a901f5620952a7270e88da3a9d53da.scope - libcontainer container 3fc0bb3e59be8166112adfbef90cb32a09a901f5620952a7270e88da3a9d53da. Sep 13 00:17:25.444869 systemd[1]: Started cri-containerd-882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b.scope - libcontainer container 882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b. Sep 13 00:17:25.479412 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:25.505586 containerd[1450]: time="2025-09-13T00:17:25.505515605Z" level=info msg="StartContainer for \"3fc0bb3e59be8166112adfbef90cb32a09a901f5620952a7270e88da3a9d53da\" returns successfully" Sep 13 00:17:25.527286 containerd[1450]: time="2025-09-13T00:17:25.527144159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d7v2c,Uid:bd84b29b-9cc2-416f-b838-56415fe1dcf0,Namespace:kube-system,Attempt:1,} returns sandbox id \"882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b\"" Sep 13 00:17:25.528070 kubelet[2541]: E0913 00:17:25.528044 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:25.533590 containerd[1450]: time="2025-09-13T00:17:25.533551848Z" level=info msg="CreateContainer within sandbox \"882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:17:25.561625 containerd[1450]: time="2025-09-13T00:17:25.561507510Z" level=info msg="CreateContainer within sandbox \"882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7ca7309594d0b6aab09398740be6fbd80383afa4bd08552bc3342e7ad191a941\"" Sep 13 00:17:25.563367 containerd[1450]: time="2025-09-13T00:17:25.563313533Z" level=info msg="StartContainer for \"7ca7309594d0b6aab09398740be6fbd80383afa4bd08552bc3342e7ad191a941\"" Sep 13 00:17:25.599971 systemd[1]: Started cri-containerd-7ca7309594d0b6aab09398740be6fbd80383afa4bd08552bc3342e7ad191a941.scope - libcontainer container 7ca7309594d0b6aab09398740be6fbd80383afa4bd08552bc3342e7ad191a941. Sep 13 00:17:25.684335 containerd[1450]: time="2025-09-13T00:17:25.683859935Z" level=info msg="StartContainer for \"7ca7309594d0b6aab09398740be6fbd80383afa4bd08552bc3342e7ad191a941\" returns successfully" Sep 13 00:17:25.803996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1622294228.mount: Deactivated successfully. 
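Each "Gained IPv6LL" line in this section records systemd-networkd observing a link-local address appear on a new cali* veth. For a conventional EUI-64 address the derivation from the MAC (for example 92:0e:20:de:b7:e5 on cali586f238d58d above) is mechanical: flip the universal/local bit of the first octet and splice ff:fe into the middle. A stdlib sketch; interfaces configured for stable-privacy addresses will not match this:

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

// eui64LinkLocal derives fe80::/64 + modified EUI-64 from a 48-bit MAC.
func eui64LinkLocal(mac net.HardwareAddr) netip.Addr {
	var b [16]byte
	b[0], b[1] = 0xfe, 0x80 // fe80::/10 link-local prefix
	b[8] = mac[0] ^ 0x02    // flip the universal/local bit
	b[9], b[10] = mac[1], mac[2]
	b[11], b[12] = 0xff, 0xfe // EUI-48 -> EUI-64 filler
	b[13], b[14], b[15] = mac[3], mac[4], mac[5]
	return netip.AddrFrom16(b)
}

func main() {
	mac, _ := net.ParseMAC("92:0e:20:de:b7:e5") // MAC from the log above
	fmt.Println(eui64LinkLocal(mac))            // fe80::900e:20ff:fede:b7e5
}
```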
Sep 13 00:17:26.248378 containerd[1450]: time="2025-09-13T00:17:26.248317870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:26.249193 containerd[1450]: time="2025-09-13T00:17:26.249118683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 13 00:17:26.250416 containerd[1450]: time="2025-09-13T00:17:26.250379594Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:26.253276 containerd[1450]: time="2025-09-13T00:17:26.253217144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:26.254154 containerd[1450]: time="2025-09-13T00:17:26.254119737Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.243827059s"
Sep 13 00:17:26.254194 containerd[1450]: time="2025-09-13T00:17:26.254155704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 13 00:17:26.256381 containerd[1450]: time="2025-09-13T00:17:26.256343023Z" level=info msg="CreateContainer within sandbox \"7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 13 00:17:26.265550 containerd[1450]: time="2025-09-13T00:17:26.265506958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 13 00:17:26.275441 containerd[1450]: time="2025-09-13T00:17:26.275383594Z" level=info msg="CreateContainer within sandbox \"7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8fd13e039119ab8aca59298062b8755e7b6e9539908fdd2b20ea5bd792039680\""
Sep 13 00:17:26.276046 containerd[1450]: time="2025-09-13T00:17:26.275989162Z" level=info msg="StartContainer for \"8fd13e039119ab8aca59298062b8755e7b6e9539908fdd2b20ea5bd792039680\""
Sep 13 00:17:26.306002 systemd[1]: Started cri-containerd-8fd13e039119ab8aca59298062b8755e7b6e9539908fdd2b20ea5bd792039680.scope - libcontainer container 8fd13e039119ab8aca59298062b8755e7b6e9539908fdd2b20ea5bd792039680.
Sep 13 00:17:26.453915 containerd[1450]: time="2025-09-13T00:17:26.453850782Z" level=info msg="StartContainer for \"8fd13e039119ab8aca59298062b8755e7b6e9539908fdd2b20ea5bd792039680\" returns successfully"
Sep 13 00:17:26.472727 kubelet[2541]: E0913 00:17:26.472526 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:26.482311 kubelet[2541]: E0913 00:17:26.482264 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:26.490072 systemd-networkd[1384]: cali7faad82068f: Gained IPv6LL
Sep 13 00:17:26.490512 systemd-networkd[1384]: cali0ff2c41099f: Gained IPv6LL
Sep 13 00:17:26.792982 kubelet[2541]: I0913 00:17:26.792521 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d7v2c" podStartSLOduration=46.792498143 podStartE2EDuration="46.792498143s" podCreationTimestamp="2025-09-13 00:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:17:26.790907999 +0000 UTC m=+52.376510235" watchObservedRunningTime="2025-09-13 00:17:26.792498143 +0000 UTC m=+52.378100379"
Sep 13 00:17:26.792982 kubelet[2541]: I0913 00:17:26.792724 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-22g6l" podStartSLOduration=46.792715408 podStartE2EDuration="46.792715408s" podCreationTimestamp="2025-09-13 00:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:17:26.488968839 +0000 UTC m=+52.074571065" watchObservedRunningTime="2025-09-13 00:17:26.792715408 +0000 UTC m=+52.378317644"
Sep 13 00:17:27.322037 systemd-networkd[1384]: cali586f238d58d: Gained IPv6LL
Sep 13 00:17:27.484354 kubelet[2541]: E0913 00:17:27.484300 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:27.485044 kubelet[2541]: E0913 00:17:27.484486 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:28.439768 systemd[1]: Started sshd@9-10.0.0.139:22-10.0.0.1:35348.service - OpenSSH per-connection server daemon (10.0.0.1:35348).
Sep 13 00:17:28.488845 kubelet[2541]: E0913 00:17:28.486575 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:28.488845 kubelet[2541]: E0913 00:17:28.487708 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:29.476212 sshd[5094]: Accepted publickey for core from 10.0.0.1 port 35348 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc
Sep 13 00:17:29.478486 sshd[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:17:29.484248 systemd-logind[1432]: New session 10 of user core.
Sep 13 00:17:29.491972 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 13 00:17:29.650986 sshd[5094]: pam_unix(sshd:session): session closed for user core
Sep 13 00:17:29.658093 systemd[1]: sshd@9-10.0.0.139:22-10.0.0.1:35348.service: Deactivated successfully.
Sep 13 00:17:29.660399 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 00:17:29.661892 systemd-logind[1432]: Session 10 logged out. Waiting for processes to exit.
Sep 13 00:17:29.663171 systemd-logind[1432]: Removed session 10.
Sep 13 00:17:30.980613 containerd[1450]: time="2025-09-13T00:17:30.980541557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:30.982918 containerd[1450]: time="2025-09-13T00:17:30.982728848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 13 00:17:30.985628 containerd[1450]: time="2025-09-13T00:17:30.985565638Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:30.989972 containerd[1450]: time="2025-09-13T00:17:30.989908019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:30.990759 containerd[1450]: time="2025-09-13T00:17:30.990691999Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.725144646s"
Sep 13 00:17:30.990759 containerd[1450]: time="2025-09-13T00:17:30.990741262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 13 00:17:30.994222 containerd[1450]: time="2025-09-13T00:17:30.994154241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 13 00:17:31.002110 containerd[1450]: time="2025-09-13T00:17:31.002043907Z" level=info msg="CreateContainer within sandbox \"3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 13 00:17:31.042473 containerd[1450]: time="2025-09-13T00:17:31.042246256Z" level=info msg="CreateContainer within sandbox \"3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"94df7d2c06e0d0c3794d5a30c056dec6b130367ba079a9dc68efb74aca671926\""
Sep 13 00:17:31.044456 containerd[1450]: time="2025-09-13T00:17:31.043266182Z" level=info msg="StartContainer for \"94df7d2c06e0d0c3794d5a30c056dec6b130367ba079a9dc68efb74aca671926\""
Sep 13 00:17:31.090192 systemd[1]: Started cri-containerd-94df7d2c06e0d0c3794d5a30c056dec6b130367ba079a9dc68efb74aca671926.scope - libcontainer container 94df7d2c06e0d0c3794d5a30c056dec6b130367ba079a9dc68efb74aca671926.
Sep 13 00:17:31.155964 containerd[1450]: time="2025-09-13T00:17:31.155896351Z" level=info msg="StartContainer for \"94df7d2c06e0d0c3794d5a30c056dec6b130367ba079a9dc68efb74aca671926\" returns successfully"
Sep 13 00:17:31.545696 kubelet[2541]: I0913 00:17:31.543578 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-xz856" podStartSLOduration=32.297721375 podStartE2EDuration="36.543549821s" podCreationTimestamp="2025-09-13 00:16:55 +0000 UTC" firstStartedPulling="2025-09-13 00:17:22.009092526 +0000 UTC m=+47.594694772" lastFinishedPulling="2025-09-13 00:17:26.254920982 +0000 UTC m=+51.840523218" observedRunningTime="2025-09-13 00:17:27.709946308 +0000 UTC m=+53.295548544" watchObservedRunningTime="2025-09-13 00:17:31.543549821 +0000 UTC m=+57.129152057"
Sep 13 00:17:31.545696 kubelet[2541]: I0913 00:17:31.544017 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64cb648bfc-dpqwh" podStartSLOduration=26.641210752 podStartE2EDuration="35.544006348s" podCreationTimestamp="2025-09-13 00:16:56 +0000 UTC" firstStartedPulling="2025-09-13 00:17:22.089962868 +0000 UTC m=+47.675565114" lastFinishedPulling="2025-09-13 00:17:30.992758474 +0000 UTC m=+56.578360710" observedRunningTime="2025-09-13 00:17:31.543843632 +0000 UTC m=+57.129446008" watchObservedRunningTime="2025-09-13 00:17:31.544006348 +0000 UTC m=+57.129608584"
Sep 13 00:17:33.160129 containerd[1450]: time="2025-09-13T00:17:33.160041512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:33.161212 containerd[1450]: time="2025-09-13T00:17:33.161152253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291"
Sep 13 00:17:33.164395 containerd[1450]: time="2025-09-13T00:17:33.164284357Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:33.168756 containerd[1450]: time="2025-09-13T00:17:33.168671345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:33.169842 containerd[1450]: time="2025-09-13T00:17:33.169719808Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.175500153s"
Sep 13 00:17:33.169842 containerd[1450]: time="2025-09-13T00:17:33.169787305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\""
Sep 13 00:17:33.172233 containerd[1450]: time="2025-09-13T00:17:33.172167906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 13 00:17:33.173021 containerd[1450]: time="2025-09-13T00:17:33.172968162Z" level=info msg="CreateContainer within sandbox \"55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 13 00:17:33.213767 containerd[1450]: time="2025-09-13T00:17:33.213713890Z" level=info msg="CreateContainer within sandbox \"55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0da890a9bb071b6dfa73465902f2d08e1158f84544d49869b84452aa511d99da\""
Sep 13 00:17:33.214391 containerd[1450]: time="2025-09-13T00:17:33.214347293Z" level=info msg="StartContainer for \"0da890a9bb071b6dfa73465902f2d08e1158f84544d49869b84452aa511d99da\""
Sep 13 00:17:33.260074 systemd[1]: Started cri-containerd-0da890a9bb071b6dfa73465902f2d08e1158f84544d49869b84452aa511d99da.scope - libcontainer container 0da890a9bb071b6dfa73465902f2d08e1158f84544d49869b84452aa511d99da.
Sep 13 00:17:33.306085 containerd[1450]: time="2025-09-13T00:17:33.306002228Z" level=info msg="StartContainer for \"0da890a9bb071b6dfa73465902f2d08e1158f84544d49869b84452aa511d99da\" returns successfully"
Sep 13 00:17:34.506403 containerd[1450]: time="2025-09-13T00:17:34.506050544Z" level=info msg="StopPodSandbox for \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\""
Sep 13 00:17:34.594615 containerd[1450]: 2025-09-13 00:17:34.548 [WARNING][5271] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd84b29b-9cc2-416f-b838-56415fe1dcf0", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b", Pod:"coredns-668d6bf9bc-d7v2c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali586f238d58d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:17:34.594615 containerd[1450]: 2025-09-13 00:17:34.548 [INFO][5271] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1"
Sep 13 00:17:34.594615 containerd[1450]: 2025-09-13 00:17:34.548 [INFO][5271] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" iface="eth0" netns=""
Sep 13 00:17:34.594615 containerd[1450]: 2025-09-13 00:17:34.548 [INFO][5271] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1"
Sep 13 00:17:34.594615 containerd[1450]: 2025-09-13 00:17:34.548 [INFO][5271] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1"
Sep 13 00:17:34.594615 containerd[1450]: 2025-09-13 00:17:34.576 [INFO][5282] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" HandleID="k8s-pod-network.fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" Workload="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0"
Sep 13 00:17:34.594615 containerd[1450]: 2025-09-13 00:17:34.576 [INFO][5282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:17:34.594615 containerd[1450]: 2025-09-13 00:17:34.576 [INFO][5282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:17:34.594615 containerd[1450]: 2025-09-13 00:17:34.583 [WARNING][5282] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" HandleID="k8s-pod-network.fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" Workload="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0"
Sep 13 00:17:34.594615 containerd[1450]: 2025-09-13 00:17:34.583 [INFO][5282] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" HandleID="k8s-pod-network.fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" Workload="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0"
Sep 13 00:17:34.594615 containerd[1450]: 2025-09-13 00:17:34.586 [INFO][5282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:17:34.594615 containerd[1450]: 2025-09-13 00:17:34.590 [INFO][5271] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1"
Sep 13 00:17:34.600062 containerd[1450]: time="2025-09-13T00:17:34.594652110Z" level=info msg="TearDown network for sandbox \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\" successfully"
Sep 13 00:17:34.600062 containerd[1450]: time="2025-09-13T00:17:34.594681024Z" level=info msg="StopPodSandbox for \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\" returns successfully"
Sep 13 00:17:34.600062 containerd[1450]: time="2025-09-13T00:17:34.595399608Z" level=info msg="RemovePodSandbox for \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\""
Sep 13 00:17:34.600062 containerd[1450]: time="2025-09-13T00:17:34.597692679Z" level=info msg="Forcibly stopping sandbox \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\""
Sep 13 00:17:34.671225 systemd[1]: Started sshd@10-10.0.0.139:22-10.0.0.1:41652.service - OpenSSH per-connection server daemon (10.0.0.1:41652).
Sep 13 00:17:34.677627 containerd[1450]: 2025-09-13 00:17:34.637 [WARNING][5300] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd84b29b-9cc2-416f-b838-56415fe1dcf0", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"882dd4602c8fe284e46d677d40a31cac574537beeff132d998496384c7e5383b", Pod:"coredns-668d6bf9bc-d7v2c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali586f238d58d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:17:34.677627 containerd[1450]: 2025-09-13 00:17:34.637 [INFO][5300] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1"
Sep 13 00:17:34.677627 containerd[1450]: 2025-09-13 00:17:34.637 [INFO][5300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" iface="eth0" netns=""
Sep 13 00:17:34.677627 containerd[1450]: 2025-09-13 00:17:34.637 [INFO][5300] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1"
Sep 13 00:17:34.677627 containerd[1450]: 2025-09-13 00:17:34.637 [INFO][5300] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1"
Sep 13 00:17:34.677627 containerd[1450]: 2025-09-13 00:17:34.659 [INFO][5310] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" HandleID="k8s-pod-network.fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" Workload="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0"
Sep 13 00:17:34.677627 containerd[1450]: 2025-09-13 00:17:34.660 [INFO][5310] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:17:34.677627 containerd[1450]: 2025-09-13 00:17:34.660 [INFO][5310] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:17:34.677627 containerd[1450]: 2025-09-13 00:17:34.670 [WARNING][5310] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" HandleID="k8s-pod-network.fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" Workload="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0"
Sep 13 00:17:34.677627 containerd[1450]: 2025-09-13 00:17:34.670 [INFO][5310] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" HandleID="k8s-pod-network.fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1" Workload="localhost-k8s-coredns--668d6bf9bc--d7v2c-eth0"
Sep 13 00:17:34.677627 containerd[1450]: 2025-09-13 00:17:34.672 [INFO][5310] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:17:34.677627 containerd[1450]: 2025-09-13 00:17:34.674 [INFO][5300] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1"
Sep 13 00:17:34.678049 containerd[1450]: time="2025-09-13T00:17:34.677693658Z" level=info msg="TearDown network for sandbox \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\" successfully"
Sep 13 00:17:34.779604 sshd[5319]: Accepted publickey for core from 10.0.0.1 port 41652 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc
Sep 13 00:17:34.781597 sshd[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:17:34.785952 systemd-logind[1432]: New session 11 of user core.
Sep 13 00:17:34.792956 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 00:17:34.942021 sshd[5319]: pam_unix(sshd:session): session closed for user core
Sep 13 00:17:34.953869 systemd[1]: sshd@10-10.0.0.139:22-10.0.0.1:41652.service: Deactivated successfully.
Sep 13 00:17:34.955892 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 00:17:34.957786 systemd-logind[1432]: Session 11 logged out. Waiting for processes to exit.
Sep 13 00:17:34.966130 systemd[1]: Started sshd@11-10.0.0.139:22-10.0.0.1:41666.service - OpenSSH per-connection server daemon (10.0.0.1:41666).
Sep 13 00:17:34.967513 systemd-logind[1432]: Removed session 11.
Sep 13 00:17:34.994050 sshd[5334]: Accepted publickey for core from 10.0.0.1 port 41666 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc
Sep 13 00:17:34.995759 sshd[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:17:35.001024 systemd-logind[1432]: New session 12 of user core.
Sep 13 00:17:35.008027 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 13 00:17:35.031754 containerd[1450]: time="2025-09-13T00:17:35.031600975Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:17:35.031754 containerd[1450]: time="2025-09-13T00:17:35.031690363Z" level=info msg="RemovePodSandbox \"fed6cbece88e1fad890af3322fba9d7af71f96ed6155b047a0c3926b72ebf6a1\" returns successfully"
Sep 13 00:17:35.032538 containerd[1450]: time="2025-09-13T00:17:35.032499359Z" level=info msg="StopPodSandbox for \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\""
Sep 13 00:17:35.159641 containerd[1450]: 2025-09-13 00:17:35.070 [WARNING][5352] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0", GenerateName:"calico-apiserver-5b4b4787d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea66517c-8704-46d2-93ab-a6f19d47926d", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4b4787d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b", Pod:"calico-apiserver-5b4b4787d4-c79sh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic333d19b707", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:17:35.159641 containerd[1450]: 2025-09-13 00:17:35.071 [INFO][5352] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b"
Sep 13 00:17:35.159641 containerd[1450]: 2025-09-13 00:17:35.071 [INFO][5352] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" iface="eth0" netns=""
Sep 13 00:17:35.159641 containerd[1450]: 2025-09-13 00:17:35.071 [INFO][5352] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b"
Sep 13 00:17:35.159641 containerd[1450]: 2025-09-13 00:17:35.071 [INFO][5352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b"
Sep 13 00:17:35.159641 containerd[1450]: 2025-09-13 00:17:35.120 [INFO][5365] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" HandleID="k8s-pod-network.ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0"
Sep 13 00:17:35.159641 containerd[1450]: 2025-09-13 00:17:35.120 [INFO][5365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:17:35.159641 containerd[1450]: 2025-09-13 00:17:35.121 [INFO][5365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:17:35.159641 containerd[1450]: 2025-09-13 00:17:35.149 [WARNING][5365] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" HandleID="k8s-pod-network.ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0"
Sep 13 00:17:35.159641 containerd[1450]: 2025-09-13 00:17:35.149 [INFO][5365] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" HandleID="k8s-pod-network.ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0"
Sep 13 00:17:35.159641 containerd[1450]: 2025-09-13 00:17:35.153 [INFO][5365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:17:35.159641 containerd[1450]: 2025-09-13 00:17:35.156 [INFO][5352] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b"
Sep 13 00:17:35.167028 containerd[1450]: time="2025-09-13T00:17:35.159694881Z" level=info msg="TearDown network for sandbox \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\" successfully"
Sep 13 00:17:35.167028 containerd[1450]: time="2025-09-13T00:17:35.159721762Z" level=info msg="StopPodSandbox for \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\" returns successfully"
Sep 13 00:17:35.167028 containerd[1450]: time="2025-09-13T00:17:35.160296816Z" level=info msg="RemovePodSandbox for \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\""
Sep 13 00:17:35.167028 containerd[1450]: time="2025-09-13T00:17:35.160341792Z" level=info msg="Forcibly stopping sandbox \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\""
Sep 13 00:17:35.244534 containerd[1450]: time="2025-09-13T00:17:35.244408172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 13 00:17:35.250406 containerd[1450]: time="2025-09-13T00:17:35.250320615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:35.255721 containerd[1450]: time="2025-09-13T00:17:35.255583192Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:35.259973 containerd[1450]: time="2025-09-13T00:17:35.257063906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.084782147s"
Sep 13 00:17:35.259973 containerd[1450]: time="2025-09-13T00:17:35.257124810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 13 00:17:35.259973 containerd[1450]: time="2025-09-13T00:17:35.258184159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:35.258222 sshd[5334]: pam_unix(sshd:session): session closed for user core
Sep 13 00:17:35.263575 containerd[1450]: time="2025-09-13T00:17:35.263207155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 13 00:17:35.271655 containerd[1450]: time="2025-09-13T00:17:35.270983826Z" level=info msg="CreateContainer within sandbox \"a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 13 00:17:35.275891 systemd[1]: sshd@11-10.0.0.139:22-10.0.0.1:41666.service: Deactivated successfully.
Sep 13 00:17:35.280148 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:17:35.283793 systemd-logind[1432]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:17:35.296324 systemd[1]: Started sshd@12-10.0.0.139:22-10.0.0.1:41674.service - OpenSSH per-connection server daemon (10.0.0.1:41674).
Sep 13 00:17:35.299524 systemd-logind[1432]: Removed session 12.
Sep 13 00:17:35.323324 containerd[1450]: time="2025-09-13T00:17:35.323266977Z" level=info msg="CreateContainer within sandbox \"a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9842e17897c0a8c1ebc80605be24314398e2d52167a1f5614df0a26fad65e63b\""
Sep 13 00:17:35.325166 containerd[1450]: time="2025-09-13T00:17:35.324205257Z" level=info msg="StartContainer for \"9842e17897c0a8c1ebc80605be24314398e2d52167a1f5614df0a26fad65e63b\""
Sep 13 00:17:35.329687 containerd[1450]: 2025-09-13 00:17:35.242 [WARNING][5384] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0", GenerateName:"calico-apiserver-5b4b4787d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea66517c-8704-46d2-93ab-a6f19d47926d", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4b4787d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b", Pod:"calico-apiserver-5b4b4787d4-c79sh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic333d19b707", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:17:35.329687 containerd[1450]: 2025-09-13 00:17:35.243 [INFO][5384] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b"
Sep 13 00:17:35.329687 containerd[1450]: 2025-09-13 00:17:35.243 [INFO][5384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" iface="eth0" netns=""
Sep 13 00:17:35.329687 containerd[1450]: 2025-09-13 00:17:35.243 [INFO][5384] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b"
Sep 13 00:17:35.329687 containerd[1450]: 2025-09-13 00:17:35.243 [INFO][5384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b"
Sep 13 00:17:35.329687 containerd[1450]: 2025-09-13 00:17:35.308 [INFO][5394] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" HandleID="k8s-pod-network.ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0"
Sep 13 00:17:35.329687 containerd[1450]: 2025-09-13 00:17:35.308 [INFO][5394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:17:35.329687 containerd[1450]: 2025-09-13 00:17:35.308 [INFO][5394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:17:35.329687 containerd[1450]: 2025-09-13 00:17:35.318 [WARNING][5394] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" HandleID="k8s-pod-network.ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0"
Sep 13 00:17:35.329687 containerd[1450]: 2025-09-13 00:17:35.318 [INFO][5394] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" HandleID="k8s-pod-network.ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--c79sh-eth0"
Sep 13 00:17:35.329687 containerd[1450]: 2025-09-13 00:17:35.320 [INFO][5394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:17:35.329687 containerd[1450]: 2025-09-13 00:17:35.323 [INFO][5384] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b"
Sep 13 00:17:35.329687 containerd[1450]: time="2025-09-13T00:17:35.328976287Z" level=info msg="TearDown network for sandbox \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\" successfully"
Sep 13 00:17:35.335735 sshd[5403]: Accepted publickey for core from 10.0.0.1 port 41674 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc
Sep 13 00:17:35.345393 sshd[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:17:35.351940 systemd-logind[1432]: New session 13 of user core.
Sep 13 00:17:35.361320 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 13 00:17:35.373946 systemd[1]: Started cri-containerd-9842e17897c0a8c1ebc80605be24314398e2d52167a1f5614df0a26fad65e63b.scope - libcontainer container 9842e17897c0a8c1ebc80605be24314398e2d52167a1f5614df0a26fad65e63b.
Sep 13 00:17:35.552369 containerd[1450]: time="2025-09-13T00:17:35.551516877Z" level=info msg="StartContainer for \"9842e17897c0a8c1ebc80605be24314398e2d52167a1f5614df0a26fad65e63b\" returns successfully"
Sep 13 00:17:35.642340 containerd[1450]: time="2025-09-13T00:17:35.641170848Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:17:35.643040 containerd[1450]: time="2025-09-13T00:17:35.642703810Z" level=info msg="RemovePodSandbox \"ad701296b69dcfb3343c2a6aef6966323289326a1a0d923aef2b51b32f5bbc4b\" returns successfully"
Sep 13 00:17:35.643838 containerd[1450]: time="2025-09-13T00:17:35.643305674Z" level=info msg="StopPodSandbox for \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\""
Sep 13 00:17:35.659557 sshd[5403]: pam_unix(sshd:session): session closed for user core
Sep 13 00:17:35.663530 systemd[1]: sshd@12-10.0.0.139:22-10.0.0.1:41674.service: Deactivated successfully.
Sep 13 00:17:35.666590 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:17:35.668933 systemd-logind[1432]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:17:35.671399 systemd-logind[1432]: Removed session 13.
Sep 13 00:17:35.734515 containerd[1450]: 2025-09-13 00:17:35.694 [WARNING][5462] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--xz856-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"7935845b-1239-4bbe-bb1a-5deeeba20ac8", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448", Pod:"goldmane-54d579b49d-xz856", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibdc79c99898", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:17:35.734515 containerd[1450]: 2025-09-13 00:17:35.694 [INFO][5462] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4"
Sep 13 00:17:35.734515 containerd[1450]: 2025-09-13 00:17:35.694 [INFO][5462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" iface="eth0" netns=""
Sep 13 00:17:35.734515 containerd[1450]: 2025-09-13 00:17:35.694 [INFO][5462] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4"
Sep 13 00:17:35.734515 containerd[1450]: 2025-09-13 00:17:35.694 [INFO][5462] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4"
Sep 13 00:17:35.734515 containerd[1450]: 2025-09-13 00:17:35.717 [INFO][5472] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" HandleID="k8s-pod-network.aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" Workload="localhost-k8s-goldmane--54d579b49d--xz856-eth0"
Sep 13 00:17:35.734515 containerd[1450]: 2025-09-13 00:17:35.717 [INFO][5472] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:17:35.734515 containerd[1450]: 2025-09-13 00:17:35.718 [INFO][5472] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:17:35.734515 containerd[1450]: 2025-09-13 00:17:35.726 [WARNING][5472] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" HandleID="k8s-pod-network.aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" Workload="localhost-k8s-goldmane--54d579b49d--xz856-eth0"
Sep 13 00:17:35.734515 containerd[1450]: 2025-09-13 00:17:35.726 [INFO][5472] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" HandleID="k8s-pod-network.aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" Workload="localhost-k8s-goldmane--54d579b49d--xz856-eth0"
Sep 13 00:17:35.734515 containerd[1450]: 2025-09-13 00:17:35.728 [INFO][5472] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:17:35.734515 containerd[1450]: 2025-09-13 00:17:35.731 [INFO][5462] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4"
Sep 13 00:17:35.735098 containerd[1450]: time="2025-09-13T00:17:35.734595501Z" level=info msg="TearDown network for sandbox \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\" successfully"
Sep 13 00:17:35.735098 containerd[1450]: time="2025-09-13T00:17:35.734632000Z" level=info msg="StopPodSandbox for \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\" returns successfully"
Sep 13 00:17:35.735323 containerd[1450]: time="2025-09-13T00:17:35.735288709Z" level=info msg="RemovePodSandbox for \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\""
Sep 13 00:17:35.735377 containerd[1450]: time="2025-09-13T00:17:35.735336028Z" level=info msg="Forcibly stopping sandbox \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\""
Sep 13 00:17:35.818672 containerd[1450]: 2025-09-13 00:17:35.774 [WARNING][5489] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--xz856-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"7935845b-1239-4bbe-bb1a-5deeeba20ac8", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7713e26e20f4d03f13269c1d12085da146b6af8beccfd95edfcaebcdbd968448", Pod:"goldmane-54d579b49d-xz856", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibdc79c99898", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:17:35.818672 containerd[1450]: 2025-09-13 00:17:35.775 [INFO][5489] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4"
Sep 13 00:17:35.818672 containerd[1450]: 2025-09-13 00:17:35.775 [INFO][5489] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" iface="eth0" netns=""
Sep 13 00:17:35.818672 containerd[1450]: 2025-09-13 00:17:35.775 [INFO][5489] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4"
Sep 13 00:17:35.818672 containerd[1450]: 2025-09-13 00:17:35.775 [INFO][5489] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4"
Sep 13 00:17:35.818672 containerd[1450]: 2025-09-13 00:17:35.801 [INFO][5497] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" HandleID="k8s-pod-network.aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" Workload="localhost-k8s-goldmane--54d579b49d--xz856-eth0"
Sep 13 00:17:35.818672 containerd[1450]: 2025-09-13 00:17:35.802 [INFO][5497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:17:35.818672 containerd[1450]: 2025-09-13 00:17:35.802 [INFO][5497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:17:35.818672 containerd[1450]: 2025-09-13 00:17:35.810 [WARNING][5497] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" HandleID="k8s-pod-network.aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" Workload="localhost-k8s-goldmane--54d579b49d--xz856-eth0"
Sep 13 00:17:35.818672 containerd[1450]: 2025-09-13 00:17:35.810 [INFO][5497] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" HandleID="k8s-pod-network.aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4" Workload="localhost-k8s-goldmane--54d579b49d--xz856-eth0"
Sep 13 00:17:35.818672 containerd[1450]: 2025-09-13 00:17:35.812 [INFO][5497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:17:35.818672 containerd[1450]: 2025-09-13 00:17:35.815 [INFO][5489] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4"
Sep 13 00:17:35.818672 containerd[1450]: time="2025-09-13T00:17:35.818632937Z" level=info msg="TearDown network for sandbox \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\" successfully"
Sep 13 00:17:35.829617 containerd[1450]: time="2025-09-13T00:17:35.829548629Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:17:35.829725 containerd[1450]: time="2025-09-13T00:17:35.829644179Z" level=info msg="RemovePodSandbox \"aec849948e4cc92fa609335742b7298b7d5a35b15bd485f9e9cbb776982055a4\" returns successfully"
Sep 13 00:17:35.830303 containerd[1450]: time="2025-09-13T00:17:35.830264208Z" level=info msg="StopPodSandbox for \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\""
Sep 13 00:17:35.914703 containerd[1450]: 2025-09-13 00:17:35.872 [WARNING][5516] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--22g6l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"31ddf125-3a79-4bfa-9a91-12d895809873", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e", Pod:"coredns-668d6bf9bc-22g6l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ff2c41099f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:17:35.914703 containerd[1450]: 2025-09-13 00:17:35.873 [INFO][5516] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe"
Sep 13 00:17:35.914703 containerd[1450]: 2025-09-13 00:17:35.873 [INFO][5516] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" iface="eth0" netns=""
Sep 13 00:17:35.914703 containerd[1450]: 2025-09-13 00:17:35.873 [INFO][5516] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe"
Sep 13 00:17:35.914703 containerd[1450]: 2025-09-13 00:17:35.873 [INFO][5516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe"
Sep 13 00:17:35.914703 containerd[1450]: 2025-09-13 00:17:35.899 [INFO][5525] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" HandleID="k8s-pod-network.4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Workload="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0"
Sep 13 00:17:35.914703 containerd[1450]: 2025-09-13 00:17:35.899 [INFO][5525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:17:35.914703 containerd[1450]: 2025-09-13 00:17:35.899 [INFO][5525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:17:35.914703 containerd[1450]: 2025-09-13 00:17:35.906 [WARNING][5525] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" HandleID="k8s-pod-network.4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Workload="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0" Sep 13 00:17:35.914703 containerd[1450]: 2025-09-13 00:17:35.906 [INFO][5525] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" HandleID="k8s-pod-network.4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Workload="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0" Sep 13 00:17:35.914703 containerd[1450]: 2025-09-13 00:17:35.908 [INFO][5525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:35.914703 containerd[1450]: 2025-09-13 00:17:35.911 [INFO][5516] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Sep 13 00:17:35.915370 containerd[1450]: time="2025-09-13T00:17:35.914761802Z" level=info msg="TearDown network for sandbox \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\" successfully" Sep 13 00:17:35.915370 containerd[1450]: time="2025-09-13T00:17:35.914790666Z" level=info msg="StopPodSandbox for \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\" returns successfully" Sep 13 00:17:35.915472 containerd[1450]: time="2025-09-13T00:17:35.915403061Z" level=info msg="RemovePodSandbox for \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\"" Sep 13 00:17:35.915472 containerd[1450]: time="2025-09-13T00:17:35.915431125Z" level=info msg="Forcibly stopping sandbox \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\"" Sep 13 00:17:36.006007 containerd[1450]: 2025-09-13 00:17:35.961 [WARNING][5543] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--22g6l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"31ddf125-3a79-4bfa-9a91-12d895809873", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e69c082a6020282cb887e3a244475a31200af341887d94502217dce62e767f0e", Pod:"coredns-668d6bf9bc-22g6l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ff2c41099f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:36.006007 containerd[1450]: 2025-09-13 00:17:35.961 [INFO][5543] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Sep 13 00:17:36.006007 containerd[1450]: 2025-09-13 00:17:35.961 [INFO][5543] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" iface="eth0" netns="" Sep 13 00:17:36.006007 containerd[1450]: 2025-09-13 00:17:35.961 [INFO][5543] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Sep 13 00:17:36.006007 containerd[1450]: 2025-09-13 00:17:35.961 [INFO][5543] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Sep 13 00:17:36.006007 containerd[1450]: 2025-09-13 00:17:35.987 [INFO][5552] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" HandleID="k8s-pod-network.4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Workload="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0" Sep 13 00:17:36.006007 containerd[1450]: 2025-09-13 00:17:35.987 [INFO][5552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:36.006007 containerd[1450]: 2025-09-13 00:17:35.987 [INFO][5552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:36.006007 containerd[1450]: 2025-09-13 00:17:35.997 [WARNING][5552] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" HandleID="k8s-pod-network.4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Workload="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0" Sep 13 00:17:36.006007 containerd[1450]: 2025-09-13 00:17:35.997 [INFO][5552] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" HandleID="k8s-pod-network.4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Workload="localhost-k8s-coredns--668d6bf9bc--22g6l-eth0" Sep 13 00:17:36.006007 containerd[1450]: 2025-09-13 00:17:35.999 [INFO][5552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:36.006007 containerd[1450]: 2025-09-13 00:17:36.002 [INFO][5543] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe" Sep 13 00:17:36.006887 containerd[1450]: time="2025-09-13T00:17:36.006087397Z" level=info msg="TearDown network for sandbox \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\" successfully" Sep 13 00:17:36.010638 containerd[1450]: time="2025-09-13T00:17:36.010589520Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:17:36.010725 containerd[1450]: time="2025-09-13T00:17:36.010656797Z" level=info msg="RemovePodSandbox \"4095f3268ca630b7f61f32f9258d3c3e95af4a9b474ec3e3ed14cfaf710a02fe\" returns successfully" Sep 13 00:17:36.011265 containerd[1450]: time="2025-09-13T00:17:36.011237864Z" level=info msg="StopPodSandbox for \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\"" Sep 13 00:17:36.098816 containerd[1450]: 2025-09-13 00:17:36.053 [WARNING][5571] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0", GenerateName:"calico-kube-controllers-64cb648bfc-", Namespace:"calico-system", SelfLink:"", UID:"f43bb6c2-de90-41a2-b02c-8c9fc7f0992c", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64cb648bfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe", Pod:"calico-kube-controllers-64cb648bfc-dpqwh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab42ba67869", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:36.098816 containerd[1450]: 2025-09-13 00:17:36.053 [INFO][5571] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Sep 13 00:17:36.098816 containerd[1450]: 2025-09-13 00:17:36.053 [INFO][5571] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" iface="eth0" netns="" Sep 13 00:17:36.098816 containerd[1450]: 2025-09-13 00:17:36.053 [INFO][5571] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Sep 13 00:17:36.098816 containerd[1450]: 2025-09-13 00:17:36.053 [INFO][5571] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Sep 13 00:17:36.098816 containerd[1450]: 2025-09-13 00:17:36.083 [INFO][5579] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" HandleID="k8s-pod-network.6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Workload="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" Sep 13 00:17:36.098816 containerd[1450]: 2025-09-13 00:17:36.083 [INFO][5579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:36.098816 containerd[1450]: 2025-09-13 00:17:36.083 [INFO][5579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:36.098816 containerd[1450]: 2025-09-13 00:17:36.090 [WARNING][5579] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" HandleID="k8s-pod-network.6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Workload="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" Sep 13 00:17:36.098816 containerd[1450]: 2025-09-13 00:17:36.090 [INFO][5579] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" HandleID="k8s-pod-network.6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Workload="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" Sep 13 00:17:36.098816 containerd[1450]: 2025-09-13 00:17:36.092 [INFO][5579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:36.098816 containerd[1450]: 2025-09-13 00:17:36.095 [INFO][5571] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Sep 13 00:17:36.098816 containerd[1450]: time="2025-09-13T00:17:36.098764584Z" level=info msg="TearDown network for sandbox \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\" successfully" Sep 13 00:17:36.099263 containerd[1450]: time="2025-09-13T00:17:36.098823876Z" level=info msg="StopPodSandbox for \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\" returns successfully" Sep 13 00:17:36.099558 containerd[1450]: time="2025-09-13T00:17:36.099512717Z" level=info msg="RemovePodSandbox for \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\"" Sep 13 00:17:36.099610 containerd[1450]: time="2025-09-13T00:17:36.099563042Z" level=info msg="Forcibly stopping sandbox \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\"" Sep 13 00:17:36.179576 containerd[1450]: 2025-09-13 00:17:36.139 [WARNING][5597] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0", GenerateName:"calico-kube-controllers-64cb648bfc-", Namespace:"calico-system", SelfLink:"", UID:"f43bb6c2-de90-41a2-b02c-8c9fc7f0992c", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64cb648bfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3dfb055c18eff74cc4331a3580d2248cba6e7098c962df920da64bac946235fe", Pod:"calico-kube-controllers-64cb648bfc-dpqwh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab42ba67869", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:36.179576 containerd[1450]: 2025-09-13 00:17:36.139 [INFO][5597] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Sep 13 00:17:36.179576 containerd[1450]: 2025-09-13 00:17:36.139 [INFO][5597] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" iface="eth0" netns="" Sep 13 00:17:36.179576 containerd[1450]: 2025-09-13 00:17:36.139 [INFO][5597] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Sep 13 00:17:36.179576 containerd[1450]: 2025-09-13 00:17:36.139 [INFO][5597] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Sep 13 00:17:36.179576 containerd[1450]: 2025-09-13 00:17:36.164 [INFO][5605] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" HandleID="k8s-pod-network.6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Workload="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" Sep 13 00:17:36.179576 containerd[1450]: 2025-09-13 00:17:36.164 [INFO][5605] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:36.179576 containerd[1450]: 2025-09-13 00:17:36.164 [INFO][5605] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:36.179576 containerd[1450]: 2025-09-13 00:17:36.171 [WARNING][5605] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" HandleID="k8s-pod-network.6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Workload="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" Sep 13 00:17:36.179576 containerd[1450]: 2025-09-13 00:17:36.171 [INFO][5605] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" HandleID="k8s-pod-network.6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Workload="localhost-k8s-calico--kube--controllers--64cb648bfc--dpqwh-eth0" Sep 13 00:17:36.179576 containerd[1450]: 2025-09-13 00:17:36.173 [INFO][5605] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:36.179576 containerd[1450]: 2025-09-13 00:17:36.176 [INFO][5597] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252" Sep 13 00:17:36.180228 containerd[1450]: time="2025-09-13T00:17:36.179624723Z" level=info msg="TearDown network for sandbox \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\" successfully" Sep 13 00:17:36.183956 containerd[1450]: time="2025-09-13T00:17:36.183874529Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:17:36.184045 containerd[1450]: time="2025-09-13T00:17:36.183975559Z" level=info msg="RemovePodSandbox \"6cf7ceee8043c6849b889c6168ce76c01d44890af88eb54ace6c65b3bf32c252\" returns successfully" Sep 13 00:17:36.184704 containerd[1450]: time="2025-09-13T00:17:36.184649913Z" level=info msg="StopPodSandbox for \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\"" Sep 13 00:17:36.264106 containerd[1450]: 2025-09-13 00:17:36.225 [WARNING][5625] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" WorkloadEndpoint="localhost-k8s-whisker--79b496d88c--swmdn-eth0" Sep 13 00:17:36.264106 containerd[1450]: 2025-09-13 00:17:36.226 [INFO][5625] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Sep 13 00:17:36.264106 containerd[1450]: 2025-09-13 00:17:36.226 [INFO][5625] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" iface="eth0" netns="" Sep 13 00:17:36.264106 containerd[1450]: 2025-09-13 00:17:36.226 [INFO][5625] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Sep 13 00:17:36.264106 containerd[1450]: 2025-09-13 00:17:36.226 [INFO][5625] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Sep 13 00:17:36.264106 containerd[1450]: 2025-09-13 00:17:36.250 [INFO][5634] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" HandleID="k8s-pod-network.6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Workload="localhost-k8s-whisker--79b496d88c--swmdn-eth0" Sep 13 00:17:36.264106 containerd[1450]: 2025-09-13 00:17:36.250 [INFO][5634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:36.264106 containerd[1450]: 2025-09-13 00:17:36.250 [INFO][5634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:36.264106 containerd[1450]: 2025-09-13 00:17:36.256 [WARNING][5634] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" HandleID="k8s-pod-network.6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Workload="localhost-k8s-whisker--79b496d88c--swmdn-eth0" Sep 13 00:17:36.264106 containerd[1450]: 2025-09-13 00:17:36.256 [INFO][5634] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" HandleID="k8s-pod-network.6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Workload="localhost-k8s-whisker--79b496d88c--swmdn-eth0" Sep 13 00:17:36.264106 containerd[1450]: 2025-09-13 00:17:36.258 [INFO][5634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:36.264106 containerd[1450]: 2025-09-13 00:17:36.261 [INFO][5625] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Sep 13 00:17:36.264596 containerd[1450]: time="2025-09-13T00:17:36.264164481Z" level=info msg="TearDown network for sandbox \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\" successfully" Sep 13 00:17:36.264596 containerd[1450]: time="2025-09-13T00:17:36.264197584Z" level=info msg="StopPodSandbox for \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\" returns successfully" Sep 13 00:17:36.264966 containerd[1450]: time="2025-09-13T00:17:36.264901574Z" level=info msg="RemovePodSandbox for \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\"" Sep 13 00:17:36.265032 containerd[1450]: time="2025-09-13T00:17:36.264970754Z" level=info msg="Forcibly stopping sandbox \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\"" Sep 13 00:17:36.345101 containerd[1450]: 2025-09-13 00:17:36.300 [WARNING][5652] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" WorkloadEndpoint="localhost-k8s-whisker--79b496d88c--swmdn-eth0" Sep 13 00:17:36.345101 containerd[1450]: 2025-09-13 00:17:36.301 [INFO][5652] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Sep 13 00:17:36.345101 containerd[1450]: 2025-09-13 00:17:36.301 [INFO][5652] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" iface="eth0" netns="" Sep 13 00:17:36.345101 containerd[1450]: 2025-09-13 00:17:36.301 [INFO][5652] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Sep 13 00:17:36.345101 containerd[1450]: 2025-09-13 00:17:36.301 [INFO][5652] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Sep 13 00:17:36.345101 containerd[1450]: 2025-09-13 00:17:36.328 [INFO][5661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" HandleID="k8s-pod-network.6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Workload="localhost-k8s-whisker--79b496d88c--swmdn-eth0" Sep 13 00:17:36.345101 containerd[1450]: 2025-09-13 00:17:36.328 [INFO][5661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:36.345101 containerd[1450]: 2025-09-13 00:17:36.328 [INFO][5661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:36.345101 containerd[1450]: 2025-09-13 00:17:36.336 [WARNING][5661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" HandleID="k8s-pod-network.6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Workload="localhost-k8s-whisker--79b496d88c--swmdn-eth0" Sep 13 00:17:36.345101 containerd[1450]: 2025-09-13 00:17:36.336 [INFO][5661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" HandleID="k8s-pod-network.6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Workload="localhost-k8s-whisker--79b496d88c--swmdn-eth0" Sep 13 00:17:36.345101 containerd[1450]: 2025-09-13 00:17:36.338 [INFO][5661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:36.345101 containerd[1450]: 2025-09-13 00:17:36.341 [INFO][5652] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc" Sep 13 00:17:36.345524 containerd[1450]: time="2025-09-13T00:17:36.345154066Z" level=info msg="TearDown network for sandbox \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\" successfully" Sep 13 00:17:36.350198 containerd[1450]: time="2025-09-13T00:17:36.350041124Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:17:36.350198 containerd[1450]: time="2025-09-13T00:17:36.350125925Z" level=info msg="RemovePodSandbox \"6b40a6eeb906ed3a363745b5e898622b0344ef3886b26b6b5dcdd026ba1431fc\" returns successfully" Sep 13 00:17:36.350878 containerd[1450]: time="2025-09-13T00:17:36.350837088Z" level=info msg="StopPodSandbox for \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\"" Sep 13 00:17:36.438407 containerd[1450]: 2025-09-13 00:17:36.395 [WARNING][5679] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0", GenerateName:"calico-apiserver-5b4b4787d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"052be84e-22d7-4ee1-a92b-8fcfaea0b69e", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4b4787d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c", Pod:"calico-apiserver-5b4b4787d4-9z5h7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7faad82068f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:36.438407 containerd[1450]: 2025-09-13 00:17:36.395 [INFO][5679] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Sep 13 00:17:36.438407 containerd[1450]: 2025-09-13 00:17:36.395 [INFO][5679] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" iface="eth0" netns="" Sep 13 00:17:36.438407 containerd[1450]: 2025-09-13 00:17:36.395 [INFO][5679] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Sep 13 00:17:36.438407 containerd[1450]: 2025-09-13 00:17:36.395 [INFO][5679] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Sep 13 00:17:36.438407 containerd[1450]: 2025-09-13 00:17:36.422 [INFO][5689] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" HandleID="k8s-pod-network.b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" Sep 13 00:17:36.438407 containerd[1450]: 2025-09-13 00:17:36.423 [INFO][5689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:36.438407 containerd[1450]: 2025-09-13 00:17:36.423 [INFO][5689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:36.438407 containerd[1450]: 2025-09-13 00:17:36.429 [WARNING][5689] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" HandleID="k8s-pod-network.b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" Sep 13 00:17:36.438407 containerd[1450]: 2025-09-13 00:17:36.429 [INFO][5689] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" HandleID="k8s-pod-network.b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" Sep 13 00:17:36.438407 containerd[1450]: 2025-09-13 00:17:36.431 [INFO][5689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:36.438407 containerd[1450]: 2025-09-13 00:17:36.434 [INFO][5679] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Sep 13 00:17:36.439044 containerd[1450]: time="2025-09-13T00:17:36.438450041Z" level=info msg="TearDown network for sandbox \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\" successfully" Sep 13 00:17:36.439044 containerd[1450]: time="2025-09-13T00:17:36.438478173Z" level=info msg="StopPodSandbox for \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\" returns successfully" Sep 13 00:17:36.439111 containerd[1450]: time="2025-09-13T00:17:36.439084218Z" level=info msg="RemovePodSandbox for \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\"" Sep 13 00:17:36.439141 containerd[1450]: time="2025-09-13T00:17:36.439120998Z" level=info msg="Forcibly stopping sandbox \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\"" Sep 13 00:17:36.521299 containerd[1450]: 2025-09-13 00:17:36.478 [WARNING][5706] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0", GenerateName:"calico-apiserver-5b4b4787d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"052be84e-22d7-4ee1-a92b-8fcfaea0b69e", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4b4787d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c", Pod:"calico-apiserver-5b4b4787d4-9z5h7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7faad82068f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:36.521299 containerd[1450]: 2025-09-13 00:17:36.478 [INFO][5706] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Sep 13 00:17:36.521299 containerd[1450]: 2025-09-13 00:17:36.479 [INFO][5706] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" iface="eth0" netns="" Sep 13 00:17:36.521299 containerd[1450]: 2025-09-13 00:17:36.479 [INFO][5706] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Sep 13 00:17:36.521299 containerd[1450]: 2025-09-13 00:17:36.479 [INFO][5706] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Sep 13 00:17:36.521299 containerd[1450]: 2025-09-13 00:17:36.506 [INFO][5715] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" HandleID="k8s-pod-network.b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" Sep 13 00:17:36.521299 containerd[1450]: 2025-09-13 00:17:36.507 [INFO][5715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:36.521299 containerd[1450]: 2025-09-13 00:17:36.507 [INFO][5715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:36.521299 containerd[1450]: 2025-09-13 00:17:36.513 [WARNING][5715] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" HandleID="k8s-pod-network.b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" Sep 13 00:17:36.521299 containerd[1450]: 2025-09-13 00:17:36.513 [INFO][5715] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" HandleID="k8s-pod-network.b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Workload="localhost-k8s-calico--apiserver--5b4b4787d4--9z5h7-eth0" Sep 13 00:17:36.521299 containerd[1450]: 2025-09-13 00:17:36.515 [INFO][5715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:36.521299 containerd[1450]: 2025-09-13 00:17:36.518 [INFO][5706] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a" Sep 13 00:17:36.521776 containerd[1450]: time="2025-09-13T00:17:36.521392823Z" level=info msg="TearDown network for sandbox \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\" successfully" Sep 13 00:17:36.836543 containerd[1450]: time="2025-09-13T00:17:36.836430191Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:17:36.836543 containerd[1450]: time="2025-09-13T00:17:36.836554555Z" level=info msg="RemovePodSandbox \"b47d970f35f3be7e7299ce94a01297ee1833c17d0aecdbac5526d089c3a6e14a\" returns successfully" Sep 13 00:17:36.837252 containerd[1450]: time="2025-09-13T00:17:36.837214391Z" level=info msg="StopPodSandbox for \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\"" Sep 13 00:17:36.920503 containerd[1450]: 2025-09-13 00:17:36.883 [WARNING][5734] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k54nv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5a380725-fcdb-4d95-86f1-35ff10380123", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2", Pod:"csi-node-driver-k54nv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16d3fc79384", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:36.920503 containerd[1450]: 2025-09-13 00:17:36.884 [INFO][5734] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Sep 13 00:17:36.920503 containerd[1450]: 2025-09-13 00:17:36.884 [INFO][5734] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" iface="eth0" netns="" Sep 13 00:17:36.920503 containerd[1450]: 2025-09-13 00:17:36.884 [INFO][5734] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Sep 13 00:17:36.920503 containerd[1450]: 2025-09-13 00:17:36.884 [INFO][5734] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Sep 13 00:17:36.920503 containerd[1450]: 2025-09-13 00:17:36.907 [INFO][5743] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" HandleID="k8s-pod-network.fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Workload="localhost-k8s-csi--node--driver--k54nv-eth0" Sep 13 00:17:36.920503 containerd[1450]: 2025-09-13 00:17:36.907 [INFO][5743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:36.920503 containerd[1450]: 2025-09-13 00:17:36.907 [INFO][5743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:36.920503 containerd[1450]: 2025-09-13 00:17:36.913 [WARNING][5743] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" HandleID="k8s-pod-network.fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Workload="localhost-k8s-csi--node--driver--k54nv-eth0" Sep 13 00:17:36.920503 containerd[1450]: 2025-09-13 00:17:36.913 [INFO][5743] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" HandleID="k8s-pod-network.fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Workload="localhost-k8s-csi--node--driver--k54nv-eth0" Sep 13 00:17:36.920503 containerd[1450]: 2025-09-13 00:17:36.915 [INFO][5743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:36.920503 containerd[1450]: 2025-09-13 00:17:36.917 [INFO][5734] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Sep 13 00:17:36.920503 containerd[1450]: time="2025-09-13T00:17:36.920492928Z" level=info msg="TearDown network for sandbox \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\" successfully" Sep 13 00:17:36.921045 containerd[1450]: time="2025-09-13T00:17:36.920535659Z" level=info msg="StopPodSandbox for \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\" returns successfully" Sep 13 00:17:36.921355 containerd[1450]: time="2025-09-13T00:17:36.921314630Z" level=info msg="RemovePodSandbox for \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\"" Sep 13 00:17:36.921418 containerd[1450]: time="2025-09-13T00:17:36.921372059Z" level=info msg="Forcibly stopping sandbox \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\"" Sep 13 00:17:37.005217 containerd[1450]: 2025-09-13 00:17:36.961 [WARNING][5761] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k54nv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5a380725-fcdb-4d95-86f1-35ff10380123", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2", Pod:"csi-node-driver-k54nv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16d3fc79384", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:37.005217 containerd[1450]: 2025-09-13 00:17:36.962 [INFO][5761] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Sep 13 00:17:37.005217 containerd[1450]: 2025-09-13 00:17:36.962 [INFO][5761] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" iface="eth0" netns="" Sep 13 00:17:37.005217 containerd[1450]: 2025-09-13 00:17:36.962 [INFO][5761] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Sep 13 00:17:37.005217 containerd[1450]: 2025-09-13 00:17:36.962 [INFO][5761] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Sep 13 00:17:37.005217 containerd[1450]: 2025-09-13 00:17:36.989 [INFO][5770] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" HandleID="k8s-pod-network.fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Workload="localhost-k8s-csi--node--driver--k54nv-eth0" Sep 13 00:17:37.005217 containerd[1450]: 2025-09-13 00:17:36.989 [INFO][5770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:37.005217 containerd[1450]: 2025-09-13 00:17:36.989 [INFO][5770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:37.005217 containerd[1450]: 2025-09-13 00:17:36.996 [WARNING][5770] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" HandleID="k8s-pod-network.fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Workload="localhost-k8s-csi--node--driver--k54nv-eth0" Sep 13 00:17:37.005217 containerd[1450]: 2025-09-13 00:17:36.996 [INFO][5770] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" HandleID="k8s-pod-network.fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Workload="localhost-k8s-csi--node--driver--k54nv-eth0" Sep 13 00:17:37.005217 containerd[1450]: 2025-09-13 00:17:36.998 [INFO][5770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:37.005217 containerd[1450]: 2025-09-13 00:17:37.001 [INFO][5761] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f" Sep 13 00:17:37.009522 containerd[1450]: time="2025-09-13T00:17:37.005304669Z" level=info msg="TearDown network for sandbox \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\" successfully" Sep 13 00:17:37.015369 containerd[1450]: time="2025-09-13T00:17:37.015271813Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:17:37.015369 containerd[1450]: time="2025-09-13T00:17:37.015374898Z" level=info msg="RemovePodSandbox \"fd04fde7cdb8831753c0dc32c5b971ba967da253e174c5e2c286331a3b4c7c8f\" returns successfully" Sep 13 00:17:38.895562 containerd[1450]: time="2025-09-13T00:17:38.895460647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:38.896703 containerd[1450]: time="2025-09-13T00:17:38.896661420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:17:38.898154 containerd[1450]: time="2025-09-13T00:17:38.898122465Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:38.902546 containerd[1450]: time="2025-09-13T00:17:38.902495020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:38.903411 containerd[1450]: time="2025-09-13T00:17:38.903352222Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.633602275s" Sep 13 00:17:38.903411 containerd[1450]: time="2025-09-13T00:17:38.903392739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:17:38.904536 containerd[1450]: time="2025-09-13T00:17:38.904499213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:17:38.906009 containerd[1450]: 
time="2025-09-13T00:17:38.905962242Z" level=info msg="CreateContainer within sandbox \"37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:17:38.936513 containerd[1450]: time="2025-09-13T00:17:38.936434922Z" level=info msg="CreateContainer within sandbox \"37f686bdc2ce2391c2b7ddd87a45f0e77a6aa2e1eee2890127a504428fd9626b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3a8a8aa1b17f693ba634f484f252c1bfc3997087ea01303af2a02ddf18d7eda8\"" Sep 13 00:17:38.937368 containerd[1450]: time="2025-09-13T00:17:38.937287205Z" level=info msg="StartContainer for \"3a8a8aa1b17f693ba634f484f252c1bfc3997087ea01303af2a02ddf18d7eda8\"" Sep 13 00:17:38.976147 systemd[1]: Started cri-containerd-3a8a8aa1b17f693ba634f484f252c1bfc3997087ea01303af2a02ddf18d7eda8.scope - libcontainer container 3a8a8aa1b17f693ba634f484f252c1bfc3997087ea01303af2a02ddf18d7eda8. Sep 13 00:17:39.029291 containerd[1450]: time="2025-09-13T00:17:39.029224903Z" level=info msg="StartContainer for \"3a8a8aa1b17f693ba634f484f252c1bfc3997087ea01303af2a02ddf18d7eda8\" returns successfully" Sep 13 00:17:39.307613 containerd[1450]: time="2025-09-13T00:17:39.307434628Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:39.308840 containerd[1450]: time="2025-09-13T00:17:39.308739169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:17:39.311407 containerd[1450]: time="2025-09-13T00:17:39.311367850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 406.824483ms" Sep 13 00:17:39.311479 containerd[1450]: time="2025-09-13T00:17:39.311408967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:17:39.314988 containerd[1450]: time="2025-09-13T00:17:39.314954815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:17:39.316065 containerd[1450]: time="2025-09-13T00:17:39.316004784Z" level=info msg="CreateContainer within sandbox \"4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:17:39.331106 containerd[1450]: time="2025-09-13T00:17:39.331040167Z" level=info msg="CreateContainer within sandbox \"4684fea87af2f7d21f8f446f7b555a1dca2efb7d4708a393e41eae701e144e2c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"56ce2d49cf65bf1c0b46d75f82c1045fa57c1f7dac15e3505d92467dfcaf1568\"" Sep 13 00:17:39.331878 containerd[1450]: time="2025-09-13T00:17:39.331792252Z" level=info msg="StartContainer for \"56ce2d49cf65bf1c0b46d75f82c1045fa57c1f7dac15e3505d92467dfcaf1568\"" Sep 13 00:17:39.366002 systemd[1]: Started cri-containerd-56ce2d49cf65bf1c0b46d75f82c1045fa57c1f7dac15e3505d92467dfcaf1568.scope - libcontainer container 56ce2d49cf65bf1c0b46d75f82c1045fa57c1f7dac15e3505d92467dfcaf1568. 
Sep 13 00:17:39.421900 containerd[1450]: time="2025-09-13T00:17:39.421832155Z" level=info msg="StartContainer for \"56ce2d49cf65bf1c0b46d75f82c1045fa57c1f7dac15e3505d92467dfcaf1568\" returns successfully" Sep 13 00:17:39.647185 kubelet[2541]: I0913 00:17:39.647101 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b4b4787d4-9z5h7" podStartSLOduration=33.770856981 podStartE2EDuration="47.647068547s" podCreationTimestamp="2025-09-13 00:16:52 +0000 UTC" firstStartedPulling="2025-09-13 00:17:25.436352389 +0000 UTC m=+51.021954615" lastFinishedPulling="2025-09-13 00:17:39.312563935 +0000 UTC m=+64.898166181" observedRunningTime="2025-09-13 00:17:39.621119167 +0000 UTC m=+65.206721403" watchObservedRunningTime="2025-09-13 00:17:39.647068547 +0000 UTC m=+65.232670783" Sep 13 00:17:40.590835 kubelet[2541]: I0913 00:17:40.590632 2541 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:17:40.591413 kubelet[2541]: I0913 00:17:40.590987 2541 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:17:40.682427 systemd[1]: Started sshd@13-10.0.0.139:22-10.0.0.1:59684.service - OpenSSH per-connection server daemon (10.0.0.1:59684). Sep 13 00:17:40.746276 sshd[5876]: Accepted publickey for core from 10.0.0.1 port 59684 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:17:40.748292 sshd[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:40.760044 systemd-logind[1432]: New session 14 of user core. Sep 13 00:17:40.775125 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:17:41.075257 sshd[5876]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:41.081508 systemd[1]: sshd@13-10.0.0.139:22-10.0.0.1:59684.service: Deactivated successfully. Sep 13 00:17:41.083909 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:17:41.085031 systemd-logind[1432]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:17:41.086315 systemd-logind[1432]: Removed session 14. Sep 13 00:17:43.943145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560496152.mount: Deactivated successfully. 
Sep 13 00:17:43.959732 containerd[1450]: time="2025-09-13T00:17:43.959645960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:43.960731 containerd[1450]: time="2025-09-13T00:17:43.960607990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:17:43.961978 containerd[1450]: time="2025-09-13T00:17:43.961929563Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:43.965204 containerd[1450]: time="2025-09-13T00:17:43.965158262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:43.966383 containerd[1450]: time="2025-09-13T00:17:43.966322084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.651330098s" Sep 13 00:17:43.966383 containerd[1450]: time="2025-09-13T00:17:43.966366159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:17:43.986422 containerd[1450]: time="2025-09-13T00:17:43.986346249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:17:43.987472 containerd[1450]: time="2025-09-13T00:17:43.987400613Z" level=info msg="CreateContainer within sandbox \"55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:17:44.017718 containerd[1450]: time="2025-09-13T00:17:44.017627975Z" level=info msg="CreateContainer within sandbox \"55a91451cbc8604b6eacffd91116cbbfb5b8eeac9abdf901569e92eb6c0123d6\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0208982eea85b6eb5d35febf64b07ff508a4498493e7cc1e86d342235328e9f1\"" Sep 13 00:17:44.018428 containerd[1450]: time="2025-09-13T00:17:44.018352965Z" level=info msg="StartContainer for \"0208982eea85b6eb5d35febf64b07ff508a4498493e7cc1e86d342235328e9f1\"" Sep 13 00:17:44.095160 systemd[1]: Started cri-containerd-0208982eea85b6eb5d35febf64b07ff508a4498493e7cc1e86d342235328e9f1.scope - libcontainer container 0208982eea85b6eb5d35febf64b07ff508a4498493e7cc1e86d342235328e9f1. 
Sep 13 00:17:44.154465 containerd[1450]: time="2025-09-13T00:17:44.153634557Z" level=info msg="StartContainer for \"0208982eea85b6eb5d35febf64b07ff508a4498493e7cc1e86d342235328e9f1\" returns successfully" Sep 13 00:17:44.624943 kubelet[2541]: I0913 00:17:44.624852 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b4b4787d4-c79sh" podStartSLOduration=37.043194223 podStartE2EDuration="52.624831674s" podCreationTimestamp="2025-09-13 00:16:52 +0000 UTC" firstStartedPulling="2025-09-13 00:17:23.322656604 +0000 UTC m=+48.908258840" lastFinishedPulling="2025-09-13 00:17:38.904294055 +0000 UTC m=+64.489896291" observedRunningTime="2025-09-13 00:17:39.651468963 +0000 UTC m=+65.237071199" watchObservedRunningTime="2025-09-13 00:17:44.624831674 +0000 UTC m=+70.210433910" Sep 13 00:17:44.625590 kubelet[2541]: I0913 00:17:44.625031 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6bc8bd7fbb-r6htj" podStartSLOduration=2.175296909 podStartE2EDuration="23.625026604s" podCreationTimestamp="2025-09-13 00:17:21 +0000 UTC" firstStartedPulling="2025-09-13 00:17:22.517517459 +0000 UTC m=+48.103119695" lastFinishedPulling="2025-09-13 00:17:43.967247154 +0000 UTC m=+69.552849390" observedRunningTime="2025-09-13 00:17:44.623364121 +0000 UTC m=+70.208966357" watchObservedRunningTime="2025-09-13 00:17:44.625026604 +0000 UTC m=+70.210628840" Sep 13 00:17:45.519580 kubelet[2541]: E0913 00:17:45.519477 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:46.089277 systemd[1]: Started sshd@14-10.0.0.139:22-10.0.0.1:59690.service - OpenSSH per-connection server daemon (10.0.0.1:59690). Sep 13 00:17:46.189181 sshd[5974]: Accepted publickey for core from 10.0.0.1 port 59690 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:17:46.191422 sshd[5974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:46.196846 systemd-logind[1432]: New session 15 of user core. Sep 13 00:17:46.207000 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:17:46.514250 sshd[5974]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:46.520485 systemd[1]: sshd@14-10.0.0.139:22-10.0.0.1:59690.service: Deactivated successfully. Sep 13 00:17:46.524660 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:17:46.531190 systemd-logind[1432]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:17:46.532406 systemd-logind[1432]: Removed session 15. 
Sep 13 00:17:46.983892 containerd[1450]: time="2025-09-13T00:17:46.983759156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:46.985291 containerd[1450]: time="2025-09-13T00:17:46.985240860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 13 00:17:46.987376 containerd[1450]: time="2025-09-13T00:17:46.987289296Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:46.989979 containerd[1450]: time="2025-09-13T00:17:46.989909790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:46.991134 containerd[1450]: time="2025-09-13T00:17:46.990707411Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.004290668s"
Sep 13 00:17:46.991134 containerd[1450]: time="2025-09-13T00:17:46.990741596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 13 00:17:46.993157 containerd[1450]: time="2025-09-13T00:17:46.993112655Z" level=info msg="CreateContainer within sandbox \"a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 13 00:17:47.021204 containerd[1450]: time="2025-09-13T00:17:47.021141160Z" level=info msg="CreateContainer within sandbox \"a0d0b158ae8667cf3ccbfb5261d7f016e5d83f0fbaf898994598a12db984a7b2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2144cdd8b8d63f825cc575cf131eb55a1c740d7063629807c05073144dce5bc9\""
Sep 13 00:17:47.021822 containerd[1450]: time="2025-09-13T00:17:47.021777373Z" level=info msg="StartContainer for \"2144cdd8b8d63f825cc575cf131eb55a1c740d7063629807c05073144dce5bc9\""
Sep 13 00:17:47.074078 systemd[1]: Started cri-containerd-2144cdd8b8d63f825cc575cf131eb55a1c740d7063629807c05073144dce5bc9.scope - libcontainer container 2144cdd8b8d63f825cc575cf131eb55a1c740d7063629807c05073144dce5bc9.
Sep 13 00:17:47.223967 containerd[1450]: time="2025-09-13T00:17:47.223863845Z" level=info msg="StartContainer for \"2144cdd8b8d63f825cc575cf131eb55a1c740d7063629807c05073144dce5bc9\" returns successfully"
Sep 13 00:17:47.565438 kubelet[2541]: I0913 00:17:47.565374 2541 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 13 00:17:47.565438 kubelet[2541]: I0913 00:17:47.565438 2541 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 13 00:17:47.948577 kubelet[2541]: I0913 00:17:47.948511 2541 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:17:48.031191 kubelet[2541]: I0913 00:17:48.031120 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-k54nv" podStartSLOduration=28.206353973 podStartE2EDuration="52.03110093s" podCreationTimestamp="2025-09-13 00:16:56 +0000 UTC" firstStartedPulling="2025-09-13 00:17:23.166931685 +0000 UTC m=+48.752533921" lastFinishedPulling="2025-09-13 00:17:46.991678642 +0000 UTC m=+72.577280878" observedRunningTime="2025-09-13 00:17:48.030503018 +0000 UTC m=+73.616105255" watchObservedRunningTime="2025-09-13 00:17:48.03110093 +0000 UTC m=+73.616703166"
Sep 13 00:17:50.520956 kubelet[2541]: E0913 00:17:50.519509 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:50.530299 systemd[1]: run-containerd-runc-k8s.io-8fd13e039119ab8aca59298062b8755e7b6e9539908fdd2b20ea5bd792039680-runc.wOjPAs.mount: Deactivated successfully.
Sep 13 00:17:51.532211 systemd[1]: Started sshd@15-10.0.0.139:22-10.0.0.1:37836.service - OpenSSH per-connection server daemon (10.0.0.1:37836).
Sep 13 00:17:51.589148 sshd[6079]: Accepted publickey for core from 10.0.0.1 port 37836 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc
Sep 13 00:17:51.593148 sshd[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:17:51.598025 systemd-logind[1432]: New session 16 of user core.
Sep 13 00:17:51.603966 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 00:17:52.040006 sshd[6079]: pam_unix(sshd:session): session closed for user core
Sep 13 00:17:52.044626 systemd[1]: sshd@15-10.0.0.139:22-10.0.0.1:37836.service: Deactivated successfully.
Sep 13 00:17:52.047196 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:17:52.048049 systemd-logind[1432]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:17:52.049259 systemd-logind[1432]: Removed session 16.
Sep 13 00:17:57.052898 systemd[1]: Started sshd@16-10.0.0.139:22-10.0.0.1:37850.service - OpenSSH per-connection server daemon (10.0.0.1:37850).
Sep 13 00:17:57.095680 sshd[6118]: Accepted publickey for core from 10.0.0.1 port 37850 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc
Sep 13 00:17:57.097839 sshd[6118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:17:57.102928 systemd-logind[1432]: New session 17 of user core.
Sep 13 00:17:57.112080 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 00:17:57.366878 sshd[6118]: pam_unix(sshd:session): session closed for user core
Sep 13 00:17:57.374902 systemd[1]: sshd@16-10.0.0.139:22-10.0.0.1:37850.service: Deactivated successfully.
Sep 13 00:17:57.377212 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:17:57.377889 systemd-logind[1432]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:17:57.379006 systemd-logind[1432]: Removed session 17.
Sep 13 00:18:02.379228 systemd[1]: Started sshd@17-10.0.0.139:22-10.0.0.1:42560.service - OpenSSH per-connection server daemon (10.0.0.1:42560).
Sep 13 00:18:02.420480 sshd[6176]: Accepted publickey for core from 10.0.0.1 port 42560 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc
Sep 13 00:18:02.422277 sshd[6176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:02.426910 systemd-logind[1432]: New session 18 of user core.
Sep 13 00:18:02.434965 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 00:18:02.990380 sshd[6176]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:03.006982 systemd[1]: sshd@17-10.0.0.139:22-10.0.0.1:42560.service: Deactivated successfully.
Sep 13 00:18:03.009845 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:18:03.011781 systemd-logind[1432]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:18:03.024238 systemd[1]: Started sshd@18-10.0.0.139:22-10.0.0.1:42570.service - OpenSSH per-connection server daemon (10.0.0.1:42570).
Sep 13 00:18:03.025841 systemd-logind[1432]: Removed session 18.
Sep 13 00:18:03.056051 sshd[6190]: Accepted publickey for core from 10.0.0.1 port 42570 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc
Sep 13 00:18:03.057949 sshd[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:03.063672 systemd-logind[1432]: New session 19 of user core.
Sep 13 00:18:03.078951 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 00:18:04.115933 sshd[6190]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:04.126227 systemd[1]: sshd@18-10.0.0.139:22-10.0.0.1:42570.service: Deactivated successfully.
Sep 13 00:18:04.128518 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:18:04.130317 systemd-logind[1432]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:18:04.132910 systemd[1]: Started sshd@19-10.0.0.139:22-10.0.0.1:42584.service - OpenSSH per-connection server daemon (10.0.0.1:42584).
Sep 13 00:18:04.133954 systemd-logind[1432]: Removed session 19.
Sep 13 00:18:04.187608 sshd[6208]: Accepted publickey for core from 10.0.0.1 port 42584 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc
Sep 13 00:18:04.189645 sshd[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:04.194828 systemd-logind[1432]: New session 20 of user core.
Sep 13 00:18:04.205150 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 00:18:04.872527 sshd[6208]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:04.882836 systemd[1]: sshd@19-10.0.0.139:22-10.0.0.1:42584.service: Deactivated successfully.
Sep 13 00:18:04.885269 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 00:18:04.887798 systemd-logind[1432]: Session 20 logged out. Waiting for processes to exit.
Sep 13 00:18:04.894310 systemd[1]: Started sshd@20-10.0.0.139:22-10.0.0.1:42596.service - OpenSSH per-connection server daemon (10.0.0.1:42596).
Sep 13 00:18:04.897208 systemd-logind[1432]: Removed session 20.
Sep 13 00:18:04.933222 sshd[6229]: Accepted publickey for core from 10.0.0.1 port 42596 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc
Sep 13 00:18:04.935146 sshd[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:04.940244 systemd-logind[1432]: New session 21 of user core.
Sep 13 00:18:04.956029 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 00:18:05.347055 sshd[6229]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:05.356418 systemd[1]: sshd@20-10.0.0.139:22-10.0.0.1:42596.service: Deactivated successfully.
Sep 13 00:18:05.359241 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:18:05.362647 systemd-logind[1432]: Session 21 logged out. Waiting for processes to exit.
Sep 13 00:18:05.371285 systemd[1]: Started sshd@21-10.0.0.139:22-10.0.0.1:42606.service - OpenSSH per-connection server daemon (10.0.0.1:42606).
Sep 13 00:18:05.373877 systemd-logind[1432]: Removed session 21.
Sep 13 00:18:05.410720 sshd[6242]: Accepted publickey for core from 10.0.0.1 port 42606 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc
Sep 13 00:18:05.413186 sshd[6242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:05.418834 systemd-logind[1432]: New session 22 of user core.
Sep 13 00:18:05.428103 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 00:18:05.565391 sshd[6242]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:05.570684 systemd[1]: sshd@21-10.0.0.139:22-10.0.0.1:42606.service: Deactivated successfully.
Sep 13 00:18:05.573353 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:18:05.574568 systemd-logind[1432]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:18:05.575688 systemd-logind[1432]: Removed session 22.
Sep 13 00:18:05.880596 kubelet[2541]: I0913 00:18:05.879980 2541 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:18:09.519729 kubelet[2541]: E0913 00:18:09.519667 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:18:10.522159 kubelet[2541]: E0913 00:18:10.522112 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:18:10.585191 systemd[1]: Started sshd@22-10.0.0.139:22-10.0.0.1:38206.service - OpenSSH per-connection server daemon (10.0.0.1:38206).
Sep 13 00:18:10.624118 sshd[6262]: Accepted publickey for core from 10.0.0.1 port 38206 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc
Sep 13 00:18:10.625888 sshd[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:10.630321 systemd-logind[1432]: New session 23 of user core.
Sep 13 00:18:10.635942 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 13 00:18:10.784057 sshd[6262]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:10.788726 systemd[1]: sshd@22-10.0.0.139:22-10.0.0.1:38206.service: Deactivated successfully.
Sep 13 00:18:10.790786 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:18:10.791441 systemd-logind[1432]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:18:10.792484 systemd-logind[1432]: Removed session 23.
Sep 13 00:18:15.795833 systemd[1]: Started sshd@23-10.0.0.139:22-10.0.0.1:38212.service - OpenSSH per-connection server daemon (10.0.0.1:38212).
Sep 13 00:18:15.828605 sshd[6281]: Accepted publickey for core from 10.0.0.1 port 38212 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc
Sep 13 00:18:15.830364 sshd[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:15.834541 systemd-logind[1432]: New session 24 of user core.
Sep 13 00:18:15.846964 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 13 00:18:15.959565 sshd[6281]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:15.964180 systemd[1]: sshd@23-10.0.0.139:22-10.0.0.1:38212.service: Deactivated successfully.
Sep 13 00:18:15.967215 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 00:18:15.967931 systemd-logind[1432]: Session 24 logged out. Waiting for processes to exit.
Sep 13 00:18:15.968901 systemd-logind[1432]: Removed session 24.
Sep 13 00:18:20.982140 systemd[1]: Started sshd@24-10.0.0.139:22-10.0.0.1:50660.service - OpenSSH per-connection server daemon (10.0.0.1:50660).
Sep 13 00:18:21.101129 sshd[6296]: Accepted publickey for core from 10.0.0.1 port 50660 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc
Sep 13 00:18:21.103222 sshd[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:21.108114 systemd-logind[1432]: New session 25 of user core.
Sep 13 00:18:21.116002 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 00:18:21.361591 sshd[6296]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:21.369271 systemd[1]: sshd@24-10.0.0.139:22-10.0.0.1:50660.service: Deactivated successfully.
Sep 13 00:18:21.371672 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 00:18:21.373479 systemd-logind[1432]: Session 25 logged out. Waiting for processes to exit.
Sep 13 00:18:21.374622 systemd-logind[1432]: Removed session 25.
Sep 13 00:18:23.519071 kubelet[2541]: E0913 00:18:23.519013 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:18:26.387095 systemd[1]: Started sshd@25-10.0.0.139:22-10.0.0.1:50672.service - OpenSSH per-connection server daemon (10.0.0.1:50672).
Sep 13 00:18:26.438138 sshd[6334]: Accepted publickey for core from 10.0.0.1 port 50672 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc
Sep 13 00:18:26.440442 sshd[6334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:26.445666 systemd-logind[1432]: New session 26 of user core.
Sep 13 00:18:26.451121 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 13 00:18:26.650224 sshd[6334]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:26.656351 systemd[1]: sshd@25-10.0.0.139:22-10.0.0.1:50672.service: Deactivated successfully.
Sep 13 00:18:26.659107 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 00:18:26.659858 systemd-logind[1432]: Session 26 logged out. Waiting for processes to exit.
Sep 13 00:18:26.661435 systemd-logind[1432]: Removed session 26.