Sep 13 00:08:06.950489 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:08:06.950521 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:08:06.950533 kernel: BIOS-provided physical RAM map:
Sep 13 00:08:06.950539 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:08:06.950545 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 13 00:08:06.950551 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 13 00:08:06.950559 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 13 00:08:06.950565 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 13 00:08:06.950572 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Sep 13 00:08:06.950578 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Sep 13 00:08:06.950587 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Sep 13 00:08:06.950594 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Sep 13 00:08:06.950603 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Sep 13 00:08:06.950610 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Sep 13 00:08:06.950620 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Sep 13 00:08:06.950628 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 13 00:08:06.950638 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Sep 13 00:08:06.950645 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Sep 13 00:08:06.950653 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 13 00:08:06.950662 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 00:08:06.950670 kernel: NX (Execute Disable) protection: active
Sep 13 00:08:06.950679 kernel: APIC: Static calls initialized
Sep 13 00:08:06.950687 kernel: efi: EFI v2.7 by EDK II
Sep 13 00:08:06.950696 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Sep 13 00:08:06.950705 kernel: SMBIOS 2.8 present.
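The e820 map above is the firmware's inventory of physical memory; only the "usable" ranges become general-purpose RAM, and the reserved/ACPI ranges explain why less than the full address span shows up later. A minimal sketch (Python; "boot.log" is a hypothetical copy of this dump) that totals the usable ranges:

    import re

    log = open("boot.log").read()  # hypothetical path to a copy of this journal dump
    ranges = re.findall(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable", log)
    total = sum(int(end, 16) - int(start, 16) + 1 for start, end in ranges)
    print(total // 1024, "KiB")  # ~2567004 KiB, i.e. the "Memory: .../2567000K" total logged below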
Sep 13 00:08:06.950729 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Sep 13 00:08:06.950739 kernel: Hypervisor detected: KVM
Sep 13 00:08:06.950751 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:08:06.950760 kernel: kvm-clock: using sched offset of 5376975833 cycles
Sep 13 00:08:06.950769 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:08:06.950778 kernel: tsc: Detected 2794.748 MHz processor
Sep 13 00:08:06.950786 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:08:06.950793 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:08:06.950800 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Sep 13 00:08:06.950807 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 13 00:08:06.950814 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:08:06.950824 kernel: Using GB pages for direct mapping
Sep 13 00:08:06.950839 kernel: Secure boot disabled
Sep 13 00:08:06.950847 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:08:06.950854 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 13 00:08:06.950865 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 13 00:08:06.950873 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:08:06.950880 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:08:06.950891 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 13 00:08:06.950898 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:08:06.950908 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:08:06.950916 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:08:06.950923 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:08:06.950931 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 13 00:08:06.950938 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 13 00:08:06.950948 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 13 00:08:06.950955 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 13 00:08:06.950963 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 13 00:08:06.950970 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 13 00:08:06.950977 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 13 00:08:06.950985 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 13 00:08:06.950992 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 13 00:08:06.950999 kernel: No NUMA configuration found
Sep 13 00:08:06.951009 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Sep 13 00:08:06.951019 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Sep 13 00:08:06.951026 kernel: Zone ranges:
Sep 13 00:08:06.951034 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:08:06.951041 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Sep 13 00:08:06.951048 kernel: Normal empty
Sep 13 00:08:06.951056 kernel: Movable zone start for each node
Sep 13 00:08:06.951063 kernel: Early memory node ranges
Sep 13 00:08:06.951070 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 00:08:06.951078 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 13 00:08:06.951085 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 13 00:08:06.951095 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Sep 13 00:08:06.951102 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Sep 13 00:08:06.951110 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Sep 13 00:08:06.951119 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Sep 13 00:08:06.951127 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:08:06.951134 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 00:08:06.951141 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 13 00:08:06.951148 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:08:06.951156 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Sep 13 00:08:06.951166 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 13 00:08:06.951174 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Sep 13 00:08:06.951181 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:08:06.951189 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:08:06.951196 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:08:06.951203 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:08:06.951211 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:08:06.951218 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:08:06.951225 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:08:06.951235 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:08:06.951243 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:08:06.951250 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:08:06.951257 kernel: TSC deadline timer available
Sep 13 00:08:06.951265 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 13 00:08:06.951273 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:08:06.951280 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 00:08:06.951287 kernel: kvm-guest: setup PV sched yield
Sep 13 00:08:06.951295 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 13 00:08:06.951305 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:08:06.951312 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:08:06.951320 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 13 00:08:06.951327 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 13 00:08:06.951335 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 13 00:08:06.951342 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 00:08:06.951349 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:08:06.951356 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:08:06.951365 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:08:06.951378 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:08:06.951386 kernel: random: crng init done
Sep 13 00:08:06.951393 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:08:06.951401 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:08:06.951408 kernel: Fallback order for Node 0: 0
Sep 13 00:08:06.951415 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Sep 13 00:08:06.951423 kernel: Policy zone: DMA32
Sep 13 00:08:06.951430 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:08:06.951440 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 166140K reserved, 0K cma-reserved)
Sep 13 00:08:06.951448 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 00:08:06.951455 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:08:06.951462 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:08:06.951470 kernel: Dynamic Preempt: voluntary
Sep 13 00:08:06.951486 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:08:06.951497 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:08:06.951505 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 00:08:06.951513 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:08:06.951520 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:08:06.951528 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:08:06.951535 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:08:06.951546 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 00:08:06.951554 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 00:08:06.951564 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:08:06.951572 kernel: Console: colour dummy device 80x25
Sep 13 00:08:06.951579 kernel: printk: console [ttyS0] enabled
Sep 13 00:08:06.951590 kernel: ACPI: Core revision 20230628
Sep 13 00:08:06.951598 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:08:06.951606 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:08:06.951613 kernel: x2apic enabled
Sep 13 00:08:06.951621 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:08:06.951629 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 13 00:08:06.951637 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 13 00:08:06.951645 kernel: kvm-guest: setup PV IPIs
Sep 13 00:08:06.951653 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:08:06.951663 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
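The "Unknown kernel command line parameters" entry is informational: BOOT_IMAGE= means nothing to the kernel itself, but it stays visible to user space. A short sketch of reading the same command line the way an init or agent would, via the standard /proc interface:

    # Split /proc/cmdline into key=value pairs; bare flags get an empty value.
    params = {}
    for token in open("/proc/cmdline").read().split():
        key, _, value = token.partition("=")
        params[key] = value
    print(params.get("BOOT_IMAGE"))  # /flatcar/vmlinuz-a on this machine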
Sep 13 00:08:06.951671 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 13 00:08:06.951678 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 00:08:06.951686 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 00:08:06.951694 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 00:08:06.951702 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:08:06.951710 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:08:06.951756 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:08:06.951766 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 00:08:06.951780 kernel: active return thunk: retbleed_return_thunk
Sep 13 00:08:06.951790 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 00:08:06.951800 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:08:06.951810 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:08:06.951822 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 13 00:08:06.951842 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 13 00:08:06.951853 kernel: active return thunk: srso_return_thunk
Sep 13 00:08:06.951863 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 13 00:08:06.951876 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:08:06.951886 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:08:06.951896 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:08:06.951905 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:08:06.951915 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 00:08:06.951925 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:08:06.951935 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:08:06.951944 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:08:06.951952 kernel: landlock: Up and running.
Sep 13 00:08:06.951962 kernel: SELinux: Initializing.
Sep 13 00:08:06.951970 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:08:06.951978 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:08:06.951986 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 00:08:06.951994 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:08:06.952002 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:08:06.952010 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:08:06.952018 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 00:08:06.952028 kernel: ... version: 0
Sep 13 00:08:06.952036 kernel: ... bit width: 48
Sep 13 00:08:06.952044 kernel: ... generic registers: 6
Sep 13 00:08:06.952051 kernel: ... value mask: 0000ffffffffffff
Sep 13 00:08:06.952059 kernel: ... max period: 00007fffffffffff
Sep 13 00:08:06.952067 kernel: ... fixed-purpose events: 0
Sep 13 00:08:06.952074 kernel: ... event mask: 000000000000003f
Sep 13 00:08:06.952082 kernel: signal: max sigframe size: 1776
Sep 13 00:08:06.952090 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:08:06.952098 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:08:06.952108 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:08:06.952115 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:08:06.952123 kernel: .... node #0, CPUs: #1 #2 #3
Sep 13 00:08:06.952131 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 00:08:06.952139 kernel: smpboot: Max logical packages: 1
Sep 13 00:08:06.952147 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 13 00:08:06.952154 kernel: devtmpfs: initialized
Sep 13 00:08:06.952162 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:08:06.952170 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 13 00:08:06.952180 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 13 00:08:06.952188 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Sep 13 00:08:06.952196 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 13 00:08:06.952204 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 13 00:08:06.952212 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:08:06.952219 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 00:08:06.952227 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:08:06.952235 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:08:06.952243 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:08:06.952254 kernel: audit: type=2000 audit(1757722085.964:1): state=initialized audit_enabled=0 res=1
Sep 13 00:08:06.952261 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:08:06.952269 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:08:06.952277 kernel: cpuidle: using governor menu
Sep 13 00:08:06.952285 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:08:06.952292 kernel: dca service started, version 1.12.1
Sep 13 00:08:06.952300 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 13 00:08:06.952308 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 13 00:08:06.952316 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:08:06.952326 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
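Both BogoMIPS figures in this stretch are computed from loops_per_jiffy (lpj=2794748) rather than timed, since calibration was skipped under kvm-clock. A sketch of the arithmetic, assuming CONFIG_HZ=1000 (my assumption; it is the value that reproduces both logged numbers):

    lpj, hz, cpus = 2794748, 1000, 4
    whole = lpj // (500000 // hz)        # 5589
    frac = (lpj // (5000 // hz)) % 100   # 49 -> "5589.49 BogoMIPS (lpj=2794748)"
    total = round(lpj * hz / 500000 * cpus, 2)
    print(f"{whole}.{frac:02d} per CPU, {total} total")  # 5589.49 per CPU, 22357.98 total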
Sep 13 00:08:06.952334 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:08:06.952342 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:08:06.952350 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:08:06.952357 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:08:06.952365 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:08:06.952373 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:08:06.952381 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:08:06.952389 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:08:06.952399 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:08:06.952407 kernel: ACPI: Interpreter enabled
Sep 13 00:08:06.952414 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 00:08:06.952422 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:08:06.952430 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:08:06.952438 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:08:06.952446 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 00:08:06.952453 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:08:06.952675 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:08:06.952848 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 00:08:06.952984 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 00:08:06.952994 kernel: PCI host bridge to bus 0000:00
Sep 13 00:08:06.953164 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:08:06.953288 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:08:06.953423 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:08:06.953579 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 13 00:08:06.953923 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 00:08:06.954074 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Sep 13 00:08:06.954229 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:08:06.954436 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 13 00:08:06.954618 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 13 00:08:06.954855 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 13 00:08:06.955010 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Sep 13 00:08:06.955139 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 13 00:08:06.955270 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Sep 13 00:08:06.955398 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:08:06.955550 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 13 00:08:06.955680 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Sep 13 00:08:06.955859 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Sep 13 00:08:06.955988 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Sep 13 00:08:06.956153 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:08:06.956284 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Sep 13 00:08:06.956411 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 13 00:08:06.956537 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Sep 13 00:08:06.956676 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:08:06.956847 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Sep 13 00:08:06.956977 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 13 00:08:06.957105 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Sep 13 00:08:06.957232 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 13 00:08:06.957375 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 13 00:08:06.957506 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 00:08:06.957652 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 13 00:08:06.957810 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Sep 13 00:08:06.957950 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Sep 13 00:08:06.958097 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 13 00:08:06.958254 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Sep 13 00:08:06.958268 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:08:06.958276 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:08:06.958284 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:08:06.958297 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:08:06.958305 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 00:08:06.958313 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 00:08:06.958321 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 00:08:06.958329 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 00:08:06.958336 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 00:08:06.958344 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 00:08:06.958352 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 00:08:06.958360 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 00:08:06.958370 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 00:08:06.958378 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 00:08:06.958386 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 00:08:06.958394 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 00:08:06.958402 kernel: iommu: Default domain type: Translated
Sep 13 00:08:06.958410 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:08:06.958417 kernel: efivars: Registered efivars operations
Sep 13 00:08:06.958425 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:08:06.958433 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:08:06.958444 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 13 00:08:06.958451 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Sep 13 00:08:06.958459 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Sep 13 00:08:06.958467 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Sep 13 00:08:06.958599 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 00:08:06.958744 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 00:08:06.958893 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:08:06.958904 kernel: vgaarb: loaded
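The MMCONFIG window reported earlier (base 0xb0000000) maps every PCI function's 4 KiB configuration space at a fixed offset, even though this kernel does base config access through type 1 ports ("PCI: Using configuration type 1 for base access" above). A sketch of the ECAM address arithmetic (the helper name is mine):

    def ecam_address(base: int, bus: int, dev: int, fn: int) -> int:
        # ECAM layout per the PCIe spec: bus << 20 | device << 15 | function << 12
        return base + (bus << 20) + (dev << 15) + (fn << 12)

    # Config space of the SATA controller 0000:00:1f.2 from the listing above:
    print(hex(ecam_address(0xB0000000, 0x00, 0x1F, 0x2)))  # 0xb00fa000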
Sep 13 00:08:06.958912 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:08:06.958924 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:08:06.958932 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:08:06.958940 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:08:06.958948 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:08:06.958956 kernel: pnp: PnP ACPI init
Sep 13 00:08:06.959113 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 00:08:06.959125 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 00:08:06.959133 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:08:06.959145 kernel: NET: Registered PF_INET protocol family
Sep 13 00:08:06.959153 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:08:06.959161 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:08:06.959169 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:08:06.959177 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:08:06.959185 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 00:08:06.959192 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:08:06.959200 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:08:06.959209 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:08:06.959223 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:08:06.959233 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:08:06.959373 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 13 00:08:06.959514 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 13 00:08:06.959635 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:08:06.959770 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:08:06.959924 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:08:06.960052 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 13 00:08:06.960169 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 00:08:06.960286 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Sep 13 00:08:06.960296 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:08:06.960304 kernel: Initialise system trusted keyrings
Sep 13 00:08:06.960313 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:08:06.960321 kernel: Key type asymmetric registered
Sep 13 00:08:06.960329 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:08:06.960336 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:08:06.960349 kernel: io scheduler mq-deadline registered
Sep 13 00:08:06.960356 kernel: io scheduler kyber registered
Sep 13 00:08:06.960364 kernel: io scheduler bfq registered
Sep 13 00:08:06.960372 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:08:06.960381 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 00:08:06.960389 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 00:08:06.960397 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 00:08:06.960405 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:08:06.960413 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:08:06.960424 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:08:06.960432 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:08:06.960439 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:08:06.960616 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 13 00:08:06.960629 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:08:06.960769 kernel: rtc_cmos 00:04: registered as rtc0
Sep 13 00:08:06.960917 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:08:06 UTC (1757722086)
Sep 13 00:08:06.961065 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 13 00:08:06.961083 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 13 00:08:06.961091 kernel: efifb: probing for efifb
Sep 13 00:08:06.961099 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Sep 13 00:08:06.961107 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Sep 13 00:08:06.961115 kernel: efifb: scrolling: redraw
Sep 13 00:08:06.961123 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Sep 13 00:08:06.961131 kernel: Console: switching to colour frame buffer device 100x37
Sep 13 00:08:06.961156 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:08:06.961167 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:08:06.961178 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 13 00:08:06.961186 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:08:06.961194 kernel: Segment Routing with IPv6
Sep 13 00:08:06.961202 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:08:06.961210 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:08:06.961218 kernel: Key type dns_resolver registered
Sep 13 00:08:06.961226 kernel: IPI shorthand broadcast: enabled
Sep 13 00:08:06.961234 kernel: sched_clock: Marking stable (992003992, 138367250)->(1186364618, -55993376)
Sep 13 00:08:06.961242 kernel: registered taskstats version 1
Sep 13 00:08:06.961253 kernel: Loading compiled-in X.509 certificates
Sep 13 00:08:06.961261 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:08:06.961269 kernel: Key type .fscrypt registered
Sep 13 00:08:06.961277 kernel: Key type fscrypt-provisioning registered
Sep 13 00:08:06.961285 kernel: ima: No TPM chip found, activating TPM-bypass!
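The rtc_cmos entry prints the same instant twice, as an ISO timestamp and a Unix epoch, so the pair can be cross-checked directly:

    from datetime import datetime, timezone

    # 1757722086 is the epoch value from the "setting system clock" line above.
    print(datetime.fromtimestamp(1757722086, tz=timezone.utc).isoformat())
    # -> 2025-09-13T00:08:06+00:00, matching the logged 2025-09-13T00:08:06 UTC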
Sep 13 00:08:06.961294 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:08:06.961302 kernel: ima: No architecture policies found
Sep 13 00:08:06.961310 kernel: clk: Disabling unused clocks
Sep 13 00:08:06.961318 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:08:06.961329 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:08:06.961337 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:08:06.961345 kernel: Run /init as init process
Sep 13 00:08:06.961353 kernel: with arguments:
Sep 13 00:08:06.961361 kernel: /init
Sep 13 00:08:06.961369 kernel: with environment:
Sep 13 00:08:06.961377 kernel: HOME=/
Sep 13 00:08:06.961388 kernel: TERM=linux
Sep 13 00:08:06.961396 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:08:06.961409 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:08:06.961419 systemd[1]: Detected virtualization kvm.
Sep 13 00:08:06.961428 systemd[1]: Detected architecture x86-64.
Sep 13 00:08:06.961436 systemd[1]: Running in initrd.
Sep 13 00:08:06.961449 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:08:06.961458 systemd[1]: Hostname set to <localhost>.
Sep 13 00:08:06.961467 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:08:06.961475 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:08:06.961484 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:08:06.961492 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:08:06.961502 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:08:06.961510 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:08:06.961521 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:08:06.961530 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:08:06.961540 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:08:06.961550 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:08:06.961558 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:08:06.961567 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:08:06.961575 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:08:06.961586 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:08:06.961595 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:08:06.961603 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:08:06.961612 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:08:06.961620 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:08:06.961629 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:08:06.961638 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
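The dev-disk-by\x2dlabel-....device names above come from systemd's path escaping: '/' becomes the unit-name separator '-', and bytes such as a literal '-' are hex-escaped. A rough sketch of the rule (the canonical implementation is systemd-escape --path; edge cases like leading dots are ignored here):

    def systemd_escape_path(path: str) -> str:
        def esc(part: str) -> str:
            return "".join(c if c.isalnum() or c in "_.:" else "\\x%02x" % ord(c)
                           for c in part)
        return "-".join(esc(p) for p in path.strip("/").split("/"))

    print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the journal lines above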
Sep 13 00:08:06.961646 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:08:06.961658 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:08:06.961666 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:08:06.961675 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:08:06.961683 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:08:06.961692 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:08:06.961701 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:08:06.961709 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:08:06.961760 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:08:06.961771 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:08:06.961785 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:08:06.961794 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:08:06.961804 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:08:06.961814 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:08:06.961825 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:08:06.961868 systemd-journald[193]: Collecting audit messages is disabled.
Sep 13 00:08:06.961887 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:08:06.961896 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:08:06.961908 systemd-journald[193]: Journal started
Sep 13 00:08:06.961926 systemd-journald[193]: Runtime Journal (/run/log/journal/a1f2d0c97be14c2081ee32fe96622315) is 6.0M, max 48.3M, 42.2M free.
Sep 13 00:08:06.964741 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:08:06.964986 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:08:06.969631 systemd-modules-load[194]: Inserted module 'overlay'
Sep 13 00:08:06.970912 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:08:06.971693 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:08:06.978928 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:08:06.991621 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:08:06.997073 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:08:07.005063 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:08:07.014767 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:08:07.017755 kernel: Bridge firewalling registered
Sep 13 00:08:07.017743 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 13 00:08:07.019154 dracut-cmdline[222]: dracut-dracut-053
Sep 13 00:08:07.021450 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:08:07.023980 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:08:07.024216 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:08:07.041381 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:08:07.051913 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:08:07.088777 systemd-resolved[257]: Positive Trust Anchors:
Sep 13 00:08:07.088807 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:08:07.088861 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:08:07.091836 systemd-resolved[257]: Defaulting to hostname 'linux'.
Sep 13 00:08:07.093344 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:08:07.099370 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:08:07.128771 kernel: SCSI subsystem initialized
Sep 13 00:08:07.138773 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:08:07.152755 kernel: iscsi: registered transport (tcp)
Sep 13 00:08:07.180424 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:08:07.180507 kernel: QLogic iSCSI HBA Driver
Sep 13 00:08:07.246761 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:08:07.256906 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:08:07.285555 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:08:07.285679 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:08:07.285699 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:08:07.332787 kernel: raid6: avx2x4 gen() 29124 MB/s
Sep 13 00:08:07.349778 kernel: raid6: avx2x2 gen() 30295 MB/s
Sep 13 00:08:07.366899 kernel: raid6: avx2x1 gen() 24016 MB/s
Sep 13 00:08:07.366981 kernel: raid6: using algorithm avx2x2 gen() 30295 MB/s
Sep 13 00:08:07.385103 kernel: raid6: .... xor() 19055 MB/s, rmw enabled
Sep 13 00:08:07.385205 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 00:08:07.409781 kernel: xor: automatically using best checksumming function avx
Sep 13 00:08:07.592760 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:08:07.609646 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:08:07.620140 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:08:07.634878 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Sep 13 00:08:07.640266 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:08:07.650934 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:08:07.671964 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Sep 13 00:08:07.716191 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:08:07.725164 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:08:07.825821 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:08:07.837052 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:08:07.854996 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:08:07.867413 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:08:07.885858 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:08:07.891182 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:08:07.896742 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 13 00:08:07.898046 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:08:07.902748 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 13 00:08:07.904936 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:08:07.910306 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:08:07.910383 kernel: GPT:9289727 != 19775487
Sep 13 00:08:07.910400 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:08:07.913757 kernel: GPT:9289727 != 19775487
Sep 13 00:08:07.913838 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:08:07.913874 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:08:07.917745 kernel: libata version 3.00 loaded.
Sep 13 00:08:07.923196 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:08:07.927625 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:08:07.930002 kernel: ahci 0000:00:1f.2: version 3.0
Sep 13 00:08:07.930229 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 13 00:08:07.929911 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:08:07.960848 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 13 00:08:07.961132 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 13 00:08:07.962643 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:08:07.967871 kernel: scsi host0: ahci
Sep 13 00:08:07.968113 kernel: scsi host1: ahci
Sep 13 00:08:07.968321 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:08:07.968341 kernel: scsi host2: ahci
Sep 13 00:08:07.965705 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:08:07.972479 kernel: scsi host3: ahci
Sep 13 00:08:07.972756 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:08:07.965989 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:08:07.975076 kernel: scsi host4: ahci
Sep 13 00:08:07.968756 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
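The GPT warnings read as a size mismatch rather than corruption: the backup header sits where the end of a smaller build image was, while the virtual disk is 19775488 sectors; the disk-uuid[553] lines below show the headers being rewritten. The arithmetic behind the "9289727 != 19775487" complaint:

    sectors = 19775488                   # from the virtio_blk [vda] line above
    print(sectors - 1)                   # 19775487: LBA where the backup header belongs
    print((9289727 + 1) * 512 / 2**30)   # ~4.43 GiB: the disk size the image was built for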
Sep 13 00:08:07.982117 kernel: scsi host5: ahci
Sep 13 00:08:07.982448 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Sep 13 00:08:07.982462 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Sep 13 00:08:07.982473 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Sep 13 00:08:07.982483 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Sep 13 00:08:07.982493 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Sep 13 00:08:07.982504 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Sep 13 00:08:07.984033 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:08:07.991911 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:08:07.992065 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:08:08.001750 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (475)
Sep 13 00:08:08.013771 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (474)
Sep 13 00:08:08.015444 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 13 00:08:08.037956 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 13 00:08:08.056385 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:08:08.061839 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 13 00:08:08.063154 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 13 00:08:08.076933 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:08:08.098370 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:08:08.120133 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:08:08.122851 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:08:08.145318 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:08:08.207511 disk-uuid[553]: Primary Header is updated.
Sep 13 00:08:08.207511 disk-uuid[553]: Secondary Entries is updated.
Sep 13 00:08:08.207511 disk-uuid[553]: Secondary Header is updated.
Sep 13 00:08:08.211751 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:08:08.217872 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:08:08.292405 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 13 00:08:08.292479 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 13 00:08:08.292494 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 13 00:08:08.294236 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 13 00:08:08.296540 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 13 00:08:08.296730 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 13 00:08:08.298083 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 13 00:08:08.298114 kernel: ata3.00: applying bridge limits
Sep 13 00:08:08.299772 kernel: ata3.00: configured for UDMA/100
Sep 13 00:08:08.301749 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 13 00:08:08.357782 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 13 00:08:08.358241 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 00:08:08.371828 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 13 00:08:09.219561 disk-uuid[568]: The operation has completed successfully.
Sep 13 00:08:09.221180 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:08:09.258095 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:08:09.258278 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:08:09.291168 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:08:09.295916 sh[595]: Success
Sep 13 00:08:09.311750 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 13 00:08:09.353949 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:08:09.367378 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:08:09.371091 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:08:09.384912 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:08:09.385021 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:08:09.385040 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:08:09.386028 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:08:09.386879 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:08:09.411117 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:08:09.420158 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:08:09.432091 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:08:09.434239 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:08:09.446242 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:08:09.446289 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:08:09.446306 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:08:09.449749 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:08:09.462496 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:08:09.464383 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:08:09.547356 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:08:09.555035 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:08:09.573151 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:08:09.590960 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:08:09.621673 systemd-networkd[776]: lo: Link UP
Sep 13 00:08:09.621690 systemd-networkd[776]: lo: Gained carrier
Sep 13 00:08:09.623917 systemd-networkd[776]: Enumeration completed
Sep 13 00:08:09.624337 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:08:09.624697 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:08:09.624703 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:08:09.626198 systemd-networkd[776]: eth0: Link UP
Sep 13 00:08:09.626203 systemd-networkd[776]: eth0: Gained carrier
Sep 13 00:08:09.626211 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:08:09.627288 systemd[1]: Reached target network.target - Network.
Sep 13 00:08:09.644844 systemd-networkd[776]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:08:09.655194 ignition[761]: Ignition 2.19.0
Sep 13 00:08:09.655205 ignition[761]: Stage: fetch-offline
Sep 13 00:08:09.655271 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:08:09.655282 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:08:09.655397 ignition[761]: parsed url from cmdline: ""
Sep 13 00:08:09.655401 ignition[761]: no config URL provided
Sep 13 00:08:09.655407 ignition[761]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:08:09.655417 ignition[761]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:08:09.655448 ignition[761]: op(1): [started] loading QEMU firmware config module
Sep 13 00:08:09.655454 ignition[761]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 13 00:08:09.682548 ignition[761]: op(1): [finished] loading QEMU firmware config module
Sep 13 00:08:09.720765 ignition[761]: parsing config with SHA512: 1bef3fbd7daaeaf231ba41f7f0a368f43e877af63ed1037b4ab504c097e4012fd336220add4726e42266775a215a972ba393e256a563e5b685b2401a1276b277
Sep 13 00:08:09.726857 unknown[761]: fetched base config from "system"
Sep 13 00:08:09.727057 unknown[761]: fetched user config from "qemu"
Sep 13 00:08:09.727439 ignition[761]: fetch-offline: fetch-offline passed
Sep 13 00:08:09.727511 ignition[761]: Ignition finished successfully
Sep 13 00:08:09.733020 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:08:09.736799 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 00:08:09.748095 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
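Ignition logs a SHA512 digest of the config before parsing it. Presumably computed over the raw bytes as fetched, the same digest can be reproduced offline (the file name here is hypothetical):

    import hashlib

    # Hash the fetched config bytes; Ignition prints this digest before parsing.
    digest = hashlib.sha512(open("user.ign", "rb").read()).hexdigest()
    print(f"parsing config with SHA512: {digest}")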
Sep 13 00:08:09.768498 ignition[787]: Ignition 2.19.0
Sep 13 00:08:09.768512 ignition[787]: Stage: kargs
Sep 13 00:08:09.768734 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:08:09.768756 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:08:09.769623 ignition[787]: kargs: kargs passed
Sep 13 00:08:09.773416 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:08:09.769675 ignition[787]: Ignition finished successfully
Sep 13 00:08:09.788101 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:08:09.807842 ignition[796]: Ignition 2.19.0
Sep 13 00:08:09.807854 ignition[796]: Stage: disks
Sep 13 00:08:09.808051 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:08:09.808065 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:08:09.812297 ignition[796]: disks: disks passed
Sep 13 00:08:09.812361 ignition[796]: Ignition finished successfully
Sep 13 00:08:09.815785 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:08:09.818337 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:08:09.835509 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:08:09.838546 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:08:09.840956 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:08:09.843402 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:08:09.855937 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:08:09.886433 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 13 00:08:10.147604 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:08:10.165918 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:08:10.288774 kernel: EXT4-fs (vda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:08:10.289806 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:08:10.291855 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:08:10.315989 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:08:10.324214 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:08:10.324677 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 00:08:10.324772 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:08:10.324814 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:08:10.339683 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:08:10.341825 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Sep 13 00:08:10.341250 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:08:10.346658 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:08:10.346699 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:08:10.346784 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:08:10.349762 kernel: BTRFS info (device vda6): auto enabling async discard Sep 13 00:08:10.351770 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 00:08:10.446939 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:08:10.453800 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:08:10.459950 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:08:10.465533 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:08:10.577836 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 00:08:10.590029 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 00:08:10.592290 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 00:08:10.599352 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 13 00:08:10.600822 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:08:10.644053 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 13 00:08:10.655805 ignition[928]: INFO : Ignition 2.19.0 Sep 13 00:08:10.655805 ignition[928]: INFO : Stage: mount Sep 13 00:08:10.657821 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:08:10.657821 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:08:10.657821 ignition[928]: INFO : mount: mount passed Sep 13 00:08:10.657821 ignition[928]: INFO : Ignition finished successfully Sep 13 00:08:10.663860 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 00:08:10.669159 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 00:08:10.681066 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:08:10.694080 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942) Sep 13 00:08:10.694128 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:08:10.694140 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:08:10.694998 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:08:10.698779 kernel: BTRFS info (device vda6): auto enabling async discard Sep 13 00:08:10.700607 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
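
The four "cut: ... No such file or directory" records above come from initrd-setup-root reading the login databases (passwd, group, shadow, gshadow) under the freshly mounted /sysroot, which do not exist yet on a first boot. A rough Python equivalent of that probe, assuming the standard colon-separated format; the field choice mirrors "cut -d: -f1" and is illustrative:

    from pathlib import Path

    for name in ("passwd", "group", "shadow", "gshadow"):
        path = Path("/sysroot/etc") / name
        if path.exists():
            # first field of each entry, as "cut -d: -f1" would print
            print([line.split(":")[0] for line in path.read_text().splitlines() if line])
        else:
            print(f"cut: {path}: No such file or directory")
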
Sep 13 00:08:10.730515 ignition[959]: INFO : Ignition 2.19.0 Sep 13 00:08:10.730515 ignition[959]: INFO : Stage: files Sep 13 00:08:10.732592 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:08:10.732592 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:08:10.736254 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:08:10.737789 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:08:10.737789 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:08:10.743471 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:08:10.745053 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:08:10.747043 unknown[959]: wrote ssh authorized keys file for user: core Sep 13 00:08:10.748280 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:08:10.749586 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:08:10.749586 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 13 00:08:10.796864 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:08:10.932298 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:08:10.932298 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:08:10.937008 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:08:10.937008 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:08:10.937008 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:08:10.937008 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:08:10.937008 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:08:10.937008 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:08:10.937008 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:08:10.937008 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:08:10.937008 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:08:10.937008 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:08:10.937008 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:08:10.937008 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:08:10.937008 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 13 00:08:11.240781 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 00:08:11.372043 systemd-networkd[776]: eth0: Gained IPv6LL Sep 13 00:08:12.009851 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:08:12.009851 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 00:08:12.085369 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:08:12.085369 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:08:12.085369 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 00:08:12.085369 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 13 00:08:12.085369 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:08:12.085369 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:08:12.085369 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 13 00:08:12.085369 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 13 00:08:12.164380 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:08:12.218706 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:08:12.220705 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 13 00:08:12.220705 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:08:12.220705 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:08:12.220705 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:08:12.220705 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:08:12.220705 ignition[959]: INFO : files: files passed Sep 13 00:08:12.220705 ignition[959]: INFO : Ignition finished successfully Sep 13 00:08:12.270604 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:08:12.284036 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:08:12.285239 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Sep 13 00:08:12.299485 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Sep 13 00:08:12.333578 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:08:12.333578 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:08:12.332884 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:08:12.339349 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:08:12.333057 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 00:08:12.344129 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:08:12.346857 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:08:12.357871 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:08:12.384886 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:08:12.385052 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 00:08:12.403537 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 00:08:12.405370 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:08:12.408385 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:08:12.417966 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:08:12.432959 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:08:12.446117 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:08:12.457226 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:08:12.460129 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:08:12.460493 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:08:12.461032 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:08:12.461223 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:08:12.469012 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:08:12.470204 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:08:12.472183 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:08:12.473458 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:08:12.475749 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:08:12.476272 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:08:12.476621 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:08:12.477149 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:08:12.477506 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:08:12.478032 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:08:12.478368 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:08:12.478562 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:08:12.492380 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Sep 13 00:08:12.493848 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:08:12.496343 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:08:12.496544 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:08:12.499029 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:08:12.499219 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:08:12.515607 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:08:12.515828 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:08:12.518004 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:08:12.520012 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:08:12.520220 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:08:12.522606 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:08:12.524901 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:08:12.553016 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:08:12.553157 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:08:12.554943 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:08:12.555081 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:08:12.557381 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:08:12.557600 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:08:12.560124 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:08:12.560283 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 00:08:12.570960 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 00:08:12.572038 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:08:12.572211 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:08:12.575161 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:08:12.576174 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:08:12.576340 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:08:12.578959 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:08:12.579106 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:08:12.586245 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:08:12.586394 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:08:12.589651 ignition[1013]: INFO : Ignition 2.19.0 Sep 13 00:08:12.589651 ignition[1013]: INFO : Stage: umount Sep 13 00:08:12.589651 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:08:12.589651 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:08:12.594028 ignition[1013]: INFO : umount: umount passed Sep 13 00:08:12.594028 ignition[1013]: INFO : Ignition finished successfully Sep 13 00:08:12.594559 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:08:12.594747 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:08:12.596189 systemd[1]: Stopped target network.target - Network. 
Sep 13 00:08:12.597983 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:08:12.598073 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:08:12.600154 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:08:12.600210 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:08:12.602302 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:08:12.602382 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:08:12.604335 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:08:12.604405 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 00:08:12.606585 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 00:08:12.608527 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:08:12.612860 systemd-networkd[776]: eth0: DHCPv6 lease lost Sep 13 00:08:12.624493 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:08:12.624763 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:08:12.627978 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:08:12.628661 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:08:12.629036 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:08:12.634438 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:08:12.634517 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:08:12.652044 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:08:12.654436 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:08:12.654568 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:08:12.657482 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:08:12.670079 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:08:12.672133 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:08:12.672188 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:08:12.676704 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:08:12.676797 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:08:12.680973 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:08:12.696803 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:08:12.697969 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:08:12.701158 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:08:12.721890 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:08:12.725005 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:08:12.726105 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:08:12.728260 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:08:12.728322 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:08:12.731226 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:08:12.731294 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Sep 13 00:08:12.734369 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:08:12.734434 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:08:12.737594 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:08:12.738591 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:08:12.754043 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:08:12.755317 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:08:12.755408 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:08:12.757938 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:08:12.758011 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:08:12.764241 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:08:12.764394 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:08:13.335518 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:08:13.335751 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:08:13.363955 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:08:13.366223 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:08:13.366314 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:08:13.382986 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:08:13.395757 systemd[1]: Switching root. Sep 13 00:08:13.479943 systemd-journald[193]: Journal stopped Sep 13 00:08:15.109981 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Sep 13 00:08:15.110061 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:08:15.110076 kernel: SELinux: policy capability open_perms=1 Sep 13 00:08:15.110091 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:08:15.110103 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:08:15.110115 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:08:15.110126 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:08:15.110144 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:08:15.110156 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:08:15.110168 kernel: audit: type=1403 audit(1757722094.001:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:08:15.110188 systemd[1]: Successfully loaded SELinux policy in 71.186ms. Sep 13 00:08:15.110214 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.351ms. Sep 13 00:08:15.110231 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:08:15.110243 systemd[1]: Detected virtualization kvm. Sep 13 00:08:15.110263 systemd[1]: Detected architecture x86-64. Sep 13 00:08:15.110275 systemd[1]: Detected first boot. Sep 13 00:08:15.110287 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:08:15.110300 zram_generator::config[1057]: No configuration found. Sep 13 00:08:15.110319 systemd[1]: Populated /etc with preset unit settings. 
Sep 13 00:08:15.110332 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:08:15.110347 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 13 00:08:15.110359 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:08:15.110372 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 00:08:15.110385 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 13 00:08:15.110402 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 00:08:15.110415 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 00:08:15.110427 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 00:08:15.110440 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 00:08:15.110455 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 00:08:15.110467 systemd[1]: Created slice user.slice - User and Session Slice. Sep 13 00:08:15.110480 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:08:15.110494 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:08:15.110506 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 00:08:15.110519 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 00:08:15.110532 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 00:08:15.110544 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:08:15.110557 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 13 00:08:15.110572 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:08:15.110584 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 13 00:08:15.110605 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 13 00:08:15.110618 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 13 00:08:15.110631 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 00:08:15.110644 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:08:15.110656 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:08:15.110669 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:08:15.110689 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:08:15.110702 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 00:08:15.111323 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 00:08:15.111343 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:08:15.111356 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:08:15.111369 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:08:15.111381 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 00:08:15.111395 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Sep 13 00:08:15.111408 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 00:08:15.111428 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 00:08:15.111441 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:08:15.111453 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 00:08:15.111465 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 00:08:15.111478 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 00:08:15.111490 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:08:15.111503 systemd[1]: Reached target machines.target - Containers. Sep 13 00:08:15.111515 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 00:08:15.111528 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:08:15.111544 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:08:15.111557 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 00:08:15.111569 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:08:15.111581 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:08:15.111602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:08:15.111615 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 00:08:15.111627 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:08:15.111639 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:08:15.111654 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:08:15.111667 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 13 00:08:15.111680 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:08:15.111692 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:08:15.111704 kernel: loop: module loaded Sep 13 00:08:15.111730 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:08:15.111742 kernel: fuse: init (API version 7.39) Sep 13 00:08:15.111755 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:08:15.111767 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:08:15.111783 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 00:08:15.111795 kernel: ACPI: bus type drm_connector registered Sep 13 00:08:15.111828 systemd-journald[1120]: Collecting audit messages is disabled. Sep 13 00:08:15.111851 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:08:15.111864 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:08:15.111876 systemd-journald[1120]: Journal started Sep 13 00:08:15.111971 systemd-journald[1120]: Runtime Journal (/run/log/journal/a1f2d0c97be14c2081ee32fe96622315) is 6.0M, max 48.3M, 42.2M free. 
Sep 13 00:08:14.746214 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:08:14.769379 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 13 00:08:14.769981 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 00:08:15.112745 systemd[1]: Stopped verity-setup.service. Sep 13 00:08:15.116755 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:08:15.147792 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:08:15.149635 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 00:08:15.150950 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 00:08:15.152246 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 00:08:15.153505 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 00:08:15.155004 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 00:08:15.156586 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 00:08:15.158132 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:08:15.160094 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:08:15.160359 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 00:08:15.162301 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:08:15.162734 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:08:15.166382 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:08:15.166708 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:08:15.168848 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:08:15.169117 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:08:15.171075 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:08:15.171449 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 00:08:15.173796 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:08:15.174052 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:08:15.175737 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:08:15.177636 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:08:15.179917 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 00:08:15.199451 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:08:15.211996 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 00:08:15.215278 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 00:08:15.217616 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:08:15.217666 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:08:15.220218 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 13 00:08:15.223311 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Sep 13 00:08:15.228421 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 13 00:08:15.230022 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:08:15.232479 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 00:08:15.240553 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 00:08:15.244347 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:08:15.251969 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 00:08:15.253370 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:08:15.257264 systemd-journald[1120]: Time spent on flushing to /var/log/journal/a1f2d0c97be14c2081ee32fe96622315 is 20.382ms for 989 entries. Sep 13 00:08:15.257264 systemd-journald[1120]: System Journal (/var/log/journal/a1f2d0c97be14c2081ee32fe96622315) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:08:16.102304 systemd-journald[1120]: Received client request to flush runtime journal. Sep 13 00:08:16.102390 kernel: loop0: detected capacity change from 0 to 140768 Sep 13 00:08:16.102426 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:08:16.102454 kernel: loop1: detected capacity change from 0 to 221472 Sep 13 00:08:16.102544 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:08:15.257413 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:08:15.269366 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 00:08:15.277913 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 00:08:15.279525 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 00:08:15.280953 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 00:08:15.282569 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 00:08:15.293955 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 00:08:15.298149 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:08:15.301349 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 00:08:15.332609 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:08:15.360801 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:08:15.380920 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 00:08:15.424648 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:08:15.480292 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Sep 13 00:08:15.480308 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Sep 13 00:08:15.486454 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:08:15.637811 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
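
The journald self-report above puts a number on flushing: 20.382 ms to write 989 entries to /var/log/journal. Per entry, that works out to roughly 20.6 microseconds:

    # Per-entry cost of the journal flush reported above.
    flush_ms, entries = 20.382, 989
    print(f"{flush_ms / entries * 1000:.1f} microseconds per entry")  # ~20.6
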
Sep 13 00:08:15.645099 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 00:08:15.660020 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 13 00:08:16.105538 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 00:08:16.167800 kernel: loop2: detected capacity change from 0 to 142488 Sep 13 00:08:16.267863 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 00:08:16.291754 kernel: loop3: detected capacity change from 0 to 140768 Sep 13 00:08:16.421750 kernel: loop4: detected capacity change from 0 to 221472 Sep 13 00:08:16.444752 kernel: loop5: detected capacity change from 0 to 142488 Sep 13 00:08:16.451365 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 13 00:08:16.452018 (sd-merge)[1196]: Merged extensions into '/usr'. Sep 13 00:08:16.456519 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 00:08:16.456543 systemd[1]: Reloading... Sep 13 00:08:16.521440 zram_generator::config[1222]: No configuration found. Sep 13 00:08:16.692675 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:08:16.744563 systemd[1]: Reloading finished in 287 ms. Sep 13 00:08:16.777022 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 00:08:16.794130 systemd[1]: Starting ensure-sysext.service... Sep 13 00:08:16.796759 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:08:16.993761 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:08:16.994152 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 00:08:16.995234 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:08:16.995553 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Sep 13 00:08:16.995646 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Sep 13 00:08:17.000141 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:08:17.000158 systemd-tmpfiles[1260]: Skipping /boot Sep 13 00:08:17.001828 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Sep 13 00:08:17.001848 systemd[1]: Reloading... Sep 13 00:08:17.014963 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:08:17.014985 systemd-tmpfiles[1260]: Skipping /boot Sep 13 00:08:17.089868 zram_generator::config[1288]: No configuration found. Sep 13 00:08:17.210919 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:08:17.265929 systemd[1]: Reloading finished in 263 ms. Sep 13 00:08:17.287200 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:08:17.288729 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
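
The (sd-merge) records above close the loop on the earlier files stage: the kubernetes.raw link written under /etc/extensions is exactly the kind of image systemd-sysext discovers before overlaying 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' onto /usr. A small sketch of that discovery step over sysext's documented search directories; the *.raw glob is illustrative (sysext also accepts plain directory trees):

    from pathlib import Path

    # systemd-sysext's documented search directories for extension images.
    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        base = Path(d)
        if base.is_dir():
            for img in sorted(base.glob("*.raw")):
                print("found extension image:", img)
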
Sep 13 00:08:17.290542 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 00:08:17.302375 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:08:17.314013 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:08:17.317282 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 00:08:17.320380 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:08:17.327257 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:08:17.332868 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:08:17.337871 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 00:08:17.345230 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:08:17.345504 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:08:17.347655 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:08:17.350939 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:08:17.358194 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:08:17.359760 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:08:17.364965 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:08:17.367371 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:08:17.369122 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:08:17.369389 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:08:17.371474 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:08:17.371826 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:08:17.374105 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:08:17.374420 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:08:17.392660 systemd-udevd[1334]: Using default interface naming scheme 'v255'. Sep 13 00:08:17.392852 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 00:08:17.397952 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:08:17.398408 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:08:17.411448 augenrules[1358]: No rules Sep 13 00:08:17.415510 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:08:17.421134 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:08:17.438335 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:08:17.439970 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 13 00:08:17.442307 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 00:08:17.444949 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:08:17.446378 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:08:17.449004 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 00:08:17.452322 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:08:17.457827 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 00:08:17.462492 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:08:17.462792 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:08:17.465115 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:08:17.465381 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:08:17.468471 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:08:17.468968 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:08:17.489023 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:08:17.489284 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:08:17.496001 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:08:17.500890 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:08:17.503138 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:08:17.505890 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:08:17.507144 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:08:17.510919 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:08:17.512838 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:08:17.514839 systemd[1]: Finished ensure-sysext.service. Sep 13 00:08:17.516237 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:08:17.518355 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:08:17.521632 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:08:17.522035 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:08:17.528503 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:08:17.528810 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:08:17.542855 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:08:17.543182 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:08:17.565022 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:08:17.565302 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:08:17.565975 systemd-resolved[1332]: Positive Trust Anchors: Sep 13 00:08:17.565999 systemd-resolved[1332]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:08:17.566041 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:08:17.570227 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:08:17.570416 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:08:17.577290 systemd-resolved[1332]: Defaulting to hostname 'linux'. Sep 13 00:08:17.580128 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 13 00:08:17.584774 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1389) Sep 13 00:08:17.584920 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:08:17.585167 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:08:17.588677 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 13 00:08:17.588750 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:08:17.651924 systemd-networkd[1398]: lo: Link UP Sep 13 00:08:17.652347 systemd-networkd[1398]: lo: Gained carrier Sep 13 00:08:17.654409 systemd-networkd[1398]: Enumeration completed Sep 13 00:08:17.654997 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:08:17.655073 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:08:17.655553 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 00:08:17.656451 systemd-networkd[1398]: eth0: Link UP Sep 13 00:08:17.656534 systemd-networkd[1398]: eth0: Gained carrier Sep 13 00:08:17.656593 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:08:17.657751 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:08:17.659544 systemd[1]: Reached target network.target - Network. Sep 13 00:08:17.693377 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 13 00:08:17.693791 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 13 00:08:17.693994 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 13 00:08:17.695492 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 13 00:08:17.695395 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 00:08:17.699183 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
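
The positive trust anchor systemd-resolved logs above is the root zone's DS record (the KSK-2017 anchor used to validate DNSSEC from the root down). Its fields, per RFC 4034:

    # Fields of the root-zone DS trust anchor from the log above.
    key_tag = 20326    # identifies the root key-signing key
    algorithm = 8      # RSASHA256
    digest_type = 2    # SHA-256 digest of the DNSKEY
    digest = "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    print(f". IN DS {key_tag} {algorithm} {digest_type} {digest}")
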
Sep 13 00:08:17.703849 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:08:17.707241 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 13 00:08:17.718532 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 13 00:08:17.720427 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 00:08:18.137150 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 13 00:08:18.137063 systemd-resolved[1332]: Clock change detected. Flushing caches. Sep 13 00:08:18.137070 systemd-timesyncd[1408]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 13 00:08:18.137116 systemd-timesyncd[1408]: Initial clock synchronization to Sat 2025-09-13 00:08:18.136943 UTC. Sep 13 00:08:18.142061 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:08:18.162124 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:08:18.172050 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:08:18.202358 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:08:18.329215 kernel: kvm_amd: TSC scaling supported Sep 13 00:08:18.329304 kernel: kvm_amd: Nested Virtualization enabled Sep 13 00:08:18.329349 kernel: kvm_amd: Nested Paging enabled Sep 13 00:08:18.330516 kernel: kvm_amd: LBR virtualization supported Sep 13 00:08:18.330551 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 13 00:08:18.331259 kernel: kvm_amd: Virtual GIF supported Sep 13 00:08:18.360298 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:08:18.377769 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:08:18.393778 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 00:08:18.405552 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 13 00:08:18.418480 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:08:18.478074 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 00:08:18.479942 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:08:18.481266 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:08:18.482664 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 00:08:18.484176 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 00:08:18.486244 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 00:08:18.487685 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 00:08:18.489628 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 00:08:18.491082 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:08:18.491122 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:08:18.492148 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:08:18.494402 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 00:08:18.497494 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Sep 13 00:08:18.505393 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 00:08:18.508299 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 00:08:18.510158 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 00:08:18.511490 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:08:18.512533 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:08:18.513562 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:08:18.513601 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:08:18.514846 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 00:08:18.519174 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 00:08:18.522696 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:08:18.523411 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 00:08:18.527177 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:08:18.528458 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 00:08:18.531831 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 00:08:18.537190 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:08:18.540272 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:08:18.546212 jq[1441]: false Sep 13 00:08:18.545974 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:08:18.553244 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:08:18.555452 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:08:18.556538 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:08:18.560194 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:08:18.568143 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:08:18.568826 extend-filesystems[1442]: Found loop3 Sep 13 00:08:18.572021 extend-filesystems[1442]: Found loop4 Sep 13 00:08:18.572021 extend-filesystems[1442]: Found loop5 Sep 13 00:08:18.572021 extend-filesystems[1442]: Found sr0 Sep 13 00:08:18.572021 extend-filesystems[1442]: Found vda Sep 13 00:08:18.572021 extend-filesystems[1442]: Found vda1 Sep 13 00:08:18.572021 extend-filesystems[1442]: Found vda2 Sep 13 00:08:18.572021 extend-filesystems[1442]: Found vda3 Sep 13 00:08:18.572021 extend-filesystems[1442]: Found usr Sep 13 00:08:18.572021 extend-filesystems[1442]: Found vda4 Sep 13 00:08:18.572021 extend-filesystems[1442]: Found vda6 Sep 13 00:08:18.572021 extend-filesystems[1442]: Found vda7 Sep 13 00:08:18.572021 extend-filesystems[1442]: Found vda9 Sep 13 00:08:18.572021 extend-filesystems[1442]: Checking size of /dev/vda9 Sep 13 00:08:18.572152 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 00:08:18.577026 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 13 00:08:18.577320 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:08:18.578926 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:08:18.579258 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 00:08:18.585968 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:08:18.588014 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:08:18.596083 extend-filesystems[1442]: Resized partition /dev/vda9 Sep 13 00:08:18.597550 jq[1454]: true Sep 13 00:08:18.599834 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:08:18.599198 dbus-daemon[1440]: [system] SELinux support is enabled Sep 13 00:08:18.609040 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1397) Sep 13 00:08:18.609832 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:08:18.616167 update_engine[1451]: I20250913 00:08:18.614233 1451 main.cc:92] Flatcar Update Engine starting Sep 13 00:08:18.616167 update_engine[1451]: I20250913 00:08:18.615834 1451 update_check_scheduler.cc:74] Next update check in 6m26s Sep 13 00:08:18.621019 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 13 00:08:18.630673 jq[1471]: true Sep 13 00:08:18.649603 tar[1461]: linux-amd64/helm Sep 13 00:08:18.649181 (ntainerd)[1474]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:08:18.677385 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 13 00:08:18.661752 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:08:18.663346 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:08:18.663372 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:08:18.664847 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:08:18.664865 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:08:18.677822 systemd-logind[1449]: Watching system buttons on /dev/input/event2 (Power Button) Sep 13 00:08:18.677857 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:08:18.679250 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:08:18.679316 systemd-logind[1449]: New seat seat0. Sep 13 00:08:18.681840 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 00:08:18.681840 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:08:18.681840 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 13 00:08:18.683836 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:08:18.687856 extend-filesystems[1442]: Resized filesystem in /dev/vda9 Sep 13 00:08:18.694892 bash[1494]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:08:18.694871 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:08:18.695264 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
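Note: the resize above is an on-line grow: the ext4 filesystem on /dev/vda9 went from 553472 to 1864699 4k blocks while mounted on /. A minimal sketch of the same operation done by hand, with device names taken from this log (growpart comes from cloud-utils and is an assumption; the log does not show which tool extend-filesystems used to grow the partition itself):

    growpart /dev/vda 9    # grow partition 9 to the end of the disk
    resize2fs /dev/vda9    # ext4 supports on-line growth, so no unmount is needed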
Sep 13 00:08:18.700945 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:08:18.712033 sshd_keygen[1467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:08:18.711341 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 13 00:08:18.721718 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:08:18.744825 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:08:18.756121 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:08:18.765568 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:08:18.765904 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:08:18.775218 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:08:18.793040 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:08:18.802714 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:08:18.806579 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:08:18.808380 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:08:18.894276 containerd[1474]: time="2025-09-13T00:08:18.894172258Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:08:18.919813 containerd[1474]: time="2025-09-13T00:08:18.919737221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:08:18.921706 containerd[1474]: time="2025-09-13T00:08:18.921671498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:08:18.921706 containerd[1474]: time="2025-09-13T00:08:18.921698199Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:08:18.921764 containerd[1474]: time="2025-09-13T00:08:18.921714199Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:08:18.921963 containerd[1474]: time="2025-09-13T00:08:18.921944841Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:08:18.922005 containerd[1474]: time="2025-09-13T00:08:18.921972132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:08:18.922097 containerd[1474]: time="2025-09-13T00:08:18.922079413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:08:18.922118 containerd[1474]: time="2025-09-13T00:08:18.922097477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:08:18.922518 containerd[1474]: time="2025-09-13T00:08:18.922486477Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:08:18.922553 containerd[1474]: time="2025-09-13T00:08:18.922538414Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:08:18.922574 containerd[1474]: time="2025-09-13T00:08:18.922555516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:08:18.922593 containerd[1474]: time="2025-09-13T00:08:18.922570895Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:08:18.922693 containerd[1474]: time="2025-09-13T00:08:18.922677986Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:08:18.923536 containerd[1474]: time="2025-09-13T00:08:18.922987777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:08:18.923536 containerd[1474]: time="2025-09-13T00:08:18.923177042Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:08:18.923536 containerd[1474]: time="2025-09-13T00:08:18.923193783Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:08:18.923536 containerd[1474]: time="2025-09-13T00:08:18.923456406Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:08:18.923664 containerd[1474]: time="2025-09-13T00:08:18.923541135Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:08:18.935186 containerd[1474]: time="2025-09-13T00:08:18.935134426Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:08:18.935261 containerd[1474]: time="2025-09-13T00:08:18.935205910Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:08:18.935261 containerd[1474]: time="2025-09-13T00:08:18.935225297Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:08:18.935261 containerd[1474]: time="2025-09-13T00:08:18.935249472Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:08:18.935333 containerd[1474]: time="2025-09-13T00:08:18.935267947Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:08:18.935480 containerd[1474]: time="2025-09-13T00:08:18.935452974Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:08:18.935788 containerd[1474]: time="2025-09-13T00:08:18.935748277Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:08:18.935921 containerd[1474]: time="2025-09-13T00:08:18.935899321Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Sep 13 00:08:18.935948 containerd[1474]: time="2025-09-13T00:08:18.935927303Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:08:18.935981 containerd[1474]: time="2025-09-13T00:08:18.935945417Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:08:18.935981 containerd[1474]: time="2025-09-13T00:08:18.935963972Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:08:18.936047 containerd[1474]: time="2025-09-13T00:08:18.935981204Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:08:18.936047 containerd[1474]: time="2025-09-13T00:08:18.936022903Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:08:18.936084 containerd[1474]: time="2025-09-13T00:08:18.936050895Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:08:18.936103 containerd[1474]: time="2025-09-13T00:08:18.936077775Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:08:18.936103 containerd[1474]: time="2025-09-13T00:08:18.936095198Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:08:18.936164 containerd[1474]: time="2025-09-13T00:08:18.936111008Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:08:18.936164 containerd[1474]: time="2025-09-13T00:08:18.936124523Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:08:18.936228 containerd[1474]: time="2025-09-13T00:08:18.936176551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:08:18.936228 containerd[1474]: time="2025-09-13T00:08:18.936194594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:08:18.936228 containerd[1474]: time="2025-09-13T00:08:18.936209272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:08:18.936228 containerd[1474]: time="2025-09-13T00:08:18.936223068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:08:18.936322 containerd[1474]: time="2025-09-13T00:08:18.936237004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:08:18.936322 containerd[1474]: time="2025-09-13T00:08:18.936260849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:08:18.936322 containerd[1474]: time="2025-09-13T00:08:18.936278151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:08:18.936322 containerd[1474]: time="2025-09-13T00:08:18.936301144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:08:18.936394 containerd[1474]: time="2025-09-13T00:08:18.936324238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Sep 13 00:08:18.936394 containerd[1474]: time="2025-09-13T00:08:18.936341991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:08:18.936394 containerd[1474]: time="2025-09-13T00:08:18.936357650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:08:18.936394 containerd[1474]: time="2025-09-13T00:08:18.936386424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:08:18.936467 containerd[1474]: time="2025-09-13T00:08:18.936403206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:08:18.936467 containerd[1474]: time="2025-09-13T00:08:18.936422452Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:08:18.936467 containerd[1474]: time="2025-09-13T00:08:18.936445996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:08:18.936467 containerd[1474]: time="2025-09-13T00:08:18.936460483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:08:18.936543 containerd[1474]: time="2025-09-13T00:08:18.936474890Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:08:18.936563 containerd[1474]: time="2025-09-13T00:08:18.936539942Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:08:18.936582 containerd[1474]: time="2025-09-13T00:08:18.936559789Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:08:18.936582 containerd[1474]: time="2025-09-13T00:08:18.936573174Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:08:18.936647 containerd[1474]: time="2025-09-13T00:08:18.936590236Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:08:18.936647 containerd[1474]: time="2025-09-13T00:08:18.936603121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:08:18.936647 containerd[1474]: time="2025-09-13T00:08:18.936619040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:08:18.936647 containerd[1474]: time="2025-09-13T00:08:18.936631223Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:08:18.936755 containerd[1474]: time="2025-09-13T00:08:18.936643837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:08:18.937102 containerd[1474]: time="2025-09-13T00:08:18.937021485Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:08:18.937102 containerd[1474]: time="2025-09-13T00:08:18.937094041Z" level=info msg="Connect containerd service" Sep 13 00:08:18.937307 containerd[1474]: time="2025-09-13T00:08:18.937134467Z" level=info msg="using legacy CRI server" Sep 13 00:08:18.937307 containerd[1474]: time="2025-09-13T00:08:18.937144215Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:08:18.937307 containerd[1474]: time="2025-09-13T00:08:18.937274199Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:08:18.938131 containerd[1474]: time="2025-09-13T00:08:18.938097393Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:08:18.938333 
containerd[1474]: time="2025-09-13T00:08:18.938268293Z" level=info msg="Start subscribing containerd event" Sep 13 00:08:18.938379 containerd[1474]: time="2025-09-13T00:08:18.938352271Z" level=info msg="Start recovering state" Sep 13 00:08:18.938505 containerd[1474]: time="2025-09-13T00:08:18.938484789Z" level=info msg="Start event monitor" Sep 13 00:08:18.938543 containerd[1474]: time="2025-09-13T00:08:18.938489478Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:08:18.938543 containerd[1474]: time="2025-09-13T00:08:18.938530956Z" level=info msg="Start snapshots syncer" Sep 13 00:08:18.938596 containerd[1474]: time="2025-09-13T00:08:18.938545593Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:08:18.938596 containerd[1474]: time="2025-09-13T00:08:18.938556895Z" level=info msg="Start streaming server" Sep 13 00:08:18.938596 containerd[1474]: time="2025-09-13T00:08:18.938584917Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:08:18.938736 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:08:18.940180 containerd[1474]: time="2025-09-13T00:08:18.940145744Z" level=info msg="containerd successfully booted in 0.047107s" Sep 13 00:08:19.083090 tar[1461]: linux-amd64/LICENSE Sep 13 00:08:19.083220 tar[1461]: linux-amd64/README.md Sep 13 00:08:19.110203 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:08:19.849363 systemd-networkd[1398]: eth0: Gained IPv6LL Sep 13 00:08:19.853734 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:08:19.856643 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:08:19.868322 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 13 00:08:19.871311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:08:19.874128 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:08:19.895597 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 00:08:19.895893 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 00:08:19.897819 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:08:19.900562 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:08:20.636074 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:08:20.652530 systemd[1]: Started sshd@0-10.0.0.89:22-10.0.0.1:51538.service - OpenSSH per-connection server daemon (10.0.0.1:51538). Sep 13 00:08:20.703817 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 51538 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:08:20.706765 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:20.716035 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:08:20.804632 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:08:20.808803 systemd-logind[1449]: New session 1 of user core. Sep 13 00:08:20.826048 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:08:20.838305 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 13 00:08:20.845448 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:08:20.990441 systemd[1554]: Queued start job for default target default.target. Sep 13 00:08:21.050557 systemd[1554]: Created slice app.slice - User Application Slice. Sep 13 00:08:21.050587 systemd[1554]: Reached target paths.target - Paths. Sep 13 00:08:21.050602 systemd[1554]: Reached target timers.target - Timers. Sep 13 00:08:21.052514 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:08:21.069173 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:08:21.069338 systemd[1554]: Reached target sockets.target - Sockets. Sep 13 00:08:21.069360 systemd[1554]: Reached target basic.target - Basic System. Sep 13 00:08:21.069406 systemd[1554]: Reached target default.target - Main User Target. Sep 13 00:08:21.069444 systemd[1554]: Startup finished in 215ms. Sep 13 00:08:21.069917 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:08:21.072842 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:08:21.145911 systemd[1]: Started sshd@1-10.0.0.89:22-10.0.0.1:51542.service - OpenSSH per-connection server daemon (10.0.0.1:51542). Sep 13 00:08:21.204940 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 51542 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:08:21.207160 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:21.211850 systemd-logind[1449]: New session 2 of user core. Sep 13 00:08:21.220276 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:08:21.283315 sshd[1565]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:21.291755 systemd[1]: sshd@1-10.0.0.89:22-10.0.0.1:51542.service: Deactivated successfully. Sep 13 00:08:21.294283 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:08:21.296313 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:08:21.301294 systemd[1]: Started sshd@2-10.0.0.89:22-10.0.0.1:51544.service - OpenSSH per-connection server daemon (10.0.0.1:51544). Sep 13 00:08:21.303830 systemd-logind[1449]: Removed session 2. Sep 13 00:08:21.359614 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 51544 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:08:21.362024 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:21.368279 systemd-logind[1449]: New session 3 of user core. Sep 13 00:08:21.375155 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:08:21.437639 sshd[1572]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:21.460121 systemd[1]: sshd@2-10.0.0.89:22-10.0.0.1:51544.service: Deactivated successfully. Sep 13 00:08:21.463879 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:08:21.466499 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:08:21.470758 systemd-logind[1449]: Removed session 3. Sep 13 00:08:21.485753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:08:21.487891 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:08:21.490119 systemd[1]: Startup finished in 1.138s (kernel) + 7.244s (initrd) + 7.143s (userspace) = 15.527s. 
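Note: the "Startup finished in 1.138s (kernel) + 7.244s (initrd) + 7.143s (userspace) = 15.527s" line above is systemd's standard boot accounting. The same split, plus a per-unit breakdown, can be read back after boot:

    systemd-analyze          # kernel/initrd/userspace split, as logged above
    systemd-analyze blame    # per-unit initialization time, slowest first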
Sep 13 00:08:21.494446 (kubelet)[1583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:08:22.651789 kubelet[1583]: E0913 00:08:22.651627 1583 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:08:22.655839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:08:22.656095 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:08:22.656506 systemd[1]: kubelet.service: Consumed 1.864s CPU time. Sep 13 00:08:31.450943 systemd[1]: Started sshd@3-10.0.0.89:22-10.0.0.1:45780.service - OpenSSH per-connection server daemon (10.0.0.1:45780). Sep 13 00:08:31.493105 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 45780 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:08:31.495168 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:31.500244 systemd-logind[1449]: New session 4 of user core. Sep 13 00:08:31.510156 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:08:31.566012 sshd[1596]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:31.585657 systemd[1]: sshd@3-10.0.0.89:22-10.0.0.1:45780.service: Deactivated successfully. Sep 13 00:08:31.588010 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:08:31.590289 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:08:31.605567 systemd[1]: Started sshd@4-10.0.0.89:22-10.0.0.1:45794.service - OpenSSH per-connection server daemon (10.0.0.1:45794). Sep 13 00:08:31.607019 systemd-logind[1449]: Removed session 4. Sep 13 00:08:31.640299 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 45794 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:08:31.642384 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:31.646498 systemd-logind[1449]: New session 5 of user core. Sep 13 00:08:31.662172 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:08:31.713394 sshd[1603]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:31.730681 systemd[1]: sshd@4-10.0.0.89:22-10.0.0.1:45794.service: Deactivated successfully. Sep 13 00:08:31.732416 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:08:31.734129 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:08:31.743332 systemd[1]: Started sshd@5-10.0.0.89:22-10.0.0.1:45798.service - OpenSSH per-connection server daemon (10.0.0.1:45798). Sep 13 00:08:31.744793 systemd-logind[1449]: Removed session 5. Sep 13 00:08:31.778516 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 45798 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:08:31.780093 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:31.784766 systemd-logind[1449]: New session 6 of user core. Sep 13 00:08:31.802139 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 13 00:08:31.857631 sshd[1610]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:31.867925 systemd[1]: sshd@5-10.0.0.89:22-10.0.0.1:45798.service: Deactivated successfully. Sep 13 00:08:31.869884 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:08:31.871215 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:08:31.872496 systemd[1]: Started sshd@6-10.0.0.89:22-10.0.0.1:45802.service - OpenSSH per-connection server daemon (10.0.0.1:45802). Sep 13 00:08:31.873202 systemd-logind[1449]: Removed session 6. Sep 13 00:08:31.924428 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 45802 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:08:31.926066 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:31.930290 systemd-logind[1449]: New session 7 of user core. Sep 13 00:08:31.941135 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:08:32.001833 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:08:32.002222 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:08:32.021590 sudo[1620]: pam_unix(sudo:session): session closed for user root Sep 13 00:08:32.023877 sshd[1617]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:32.034223 systemd[1]: sshd@6-10.0.0.89:22-10.0.0.1:45802.service: Deactivated successfully. Sep 13 00:08:32.036342 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:08:32.038255 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:08:32.047345 systemd[1]: Started sshd@7-10.0.0.89:22-10.0.0.1:45808.service - OpenSSH per-connection server daemon (10.0.0.1:45808). Sep 13 00:08:32.048439 systemd-logind[1449]: Removed session 7. Sep 13 00:08:32.086363 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 45808 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:08:32.088312 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:32.092761 systemd-logind[1449]: New session 8 of user core. Sep 13 00:08:32.102163 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:08:32.157859 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:08:32.158235 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:08:32.162896 sudo[1629]: pam_unix(sudo:session): session closed for user root Sep 13 00:08:32.171206 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:08:32.171625 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:08:32.191244 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:08:32.193506 auditctl[1632]: No rules Sep 13 00:08:32.194881 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:08:32.195199 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:08:32.197174 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:08:32.246106 augenrules[1650]: No rules Sep 13 00:08:32.248267 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
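Note: the "No rules" messages from auditctl and augenrules above follow directly from the preceding sudo rm of /etc/audit/rules.d/80-selinux.rules and 99-default.rules; audit-rules.service rebuilds the kernel ruleset from whatever remains in /etc/audit/rules.d/. The manual equivalent of that restart, assuming the standard auditd userspace tools:

    augenrules --load    # merge /etc/audit/rules.d/*.rules and load the result
    auditctl -l          # list the active ruleset (empty here)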
Sep 13 00:08:32.249735 sudo[1628]: pam_unix(sudo:session): session closed for user root Sep 13 00:08:32.251818 sshd[1625]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:32.264804 systemd[1]: sshd@7-10.0.0.89:22-10.0.0.1:45808.service: Deactivated successfully. Sep 13 00:08:32.267055 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:08:32.269216 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:08:32.279394 systemd[1]: Started sshd@8-10.0.0.89:22-10.0.0.1:45818.service - OpenSSH per-connection server daemon (10.0.0.1:45818). Sep 13 00:08:32.280609 systemd-logind[1449]: Removed session 8. Sep 13 00:08:32.318211 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 45818 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:08:32.320360 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:32.325271 systemd-logind[1449]: New session 9 of user core. Sep 13 00:08:32.340170 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:08:32.397506 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:08:32.397918 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:08:32.870906 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:08:32.895309 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:08:33.318769 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:08:33.323265 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:08:33.384863 kubelet[1685]: E0913 00:08:33.384790 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:08:33.392501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:08:33.392834 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:08:33.471381 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:08:33.471519 (dockerd)[1696]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:08:33.955945 dockerd[1696]: time="2025-09-13T00:08:33.955862934Z" level=info msg="Starting up" Sep 13 00:08:35.404688 dockerd[1696]: time="2025-09-13T00:08:35.404608449Z" level=info msg="Loading containers: start." Sep 13 00:08:35.745021 kernel: Initializing XFRM netlink socket Sep 13 00:08:35.872572 systemd-networkd[1398]: docker0: Link UP Sep 13 00:08:36.301189 dockerd[1696]: time="2025-09-13T00:08:36.301036662Z" level=info msg="Loading containers: done." Sep 13 00:08:36.377373 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3667852311-merged.mount: Deactivated successfully. 
Sep 13 00:08:36.571406 dockerd[1696]: time="2025-09-13T00:08:36.571220182Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:08:36.571823 dockerd[1696]: time="2025-09-13T00:08:36.571407343Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:08:36.571823 dockerd[1696]: time="2025-09-13T00:08:36.571592501Z" level=info msg="Daemon has completed initialization" Sep 13 00:08:36.625278 dockerd[1696]: time="2025-09-13T00:08:36.625174526Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:08:36.625503 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:08:38.133110 containerd[1474]: time="2025-09-13T00:08:38.133050552Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:08:39.492831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2876988081.mount: Deactivated successfully. Sep 13 00:08:40.756620 containerd[1474]: time="2025-09-13T00:08:40.756534811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:40.759126 containerd[1474]: time="2025-09-13T00:08:40.759086877Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 13 00:08:40.760522 containerd[1474]: time="2025-09-13T00:08:40.760478427Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:40.763535 containerd[1474]: time="2025-09-13T00:08:40.763491377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:40.764745 containerd[1474]: time="2025-09-13T00:08:40.764660269Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.631565614s" Sep 13 00:08:40.764745 containerd[1474]: time="2025-09-13T00:08:40.764734989Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 00:08:40.765836 containerd[1474]: time="2025-09-13T00:08:40.765768157Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:08:42.485848 containerd[1474]: time="2025-09-13T00:08:42.485786636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:42.486660 containerd[1474]: time="2025-09-13T00:08:42.486620149Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 13 00:08:42.488099 containerd[1474]: time="2025-09-13T00:08:42.488026867Z" level=info msg="ImageCreate event 
name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:42.491531 containerd[1474]: time="2025-09-13T00:08:42.491494189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:42.493027 containerd[1474]: time="2025-09-13T00:08:42.492957494Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.727124735s" Sep 13 00:08:42.493100 containerd[1474]: time="2025-09-13T00:08:42.493034458Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 00:08:42.494926 containerd[1474]: time="2025-09-13T00:08:42.494882313Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:08:43.620871 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:08:43.630163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:08:43.850578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:08:43.857226 (kubelet)[1912]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:08:43.992202 kubelet[1912]: E0913 00:08:43.991938 1912 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:08:43.996801 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:08:43.997062 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 13 00:08:44.662124 containerd[1474]: time="2025-09-13T00:08:44.662059043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:44.663303 containerd[1474]: time="2025-09-13T00:08:44.663264624Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 13 00:08:44.665589 containerd[1474]: time="2025-09-13T00:08:44.665548627Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:44.668610 containerd[1474]: time="2025-09-13T00:08:44.668557119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:44.669717 containerd[1474]: time="2025-09-13T00:08:44.669685756Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 2.174646589s" Sep 13 00:08:44.669791 containerd[1474]: time="2025-09-13T00:08:44.669720842Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 00:08:44.670284 containerd[1474]: time="2025-09-13T00:08:44.670259783Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:08:46.432818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3264384342.mount: Deactivated successfully. 
Sep 13 00:08:47.096306 containerd[1474]: time="2025-09-13T00:08:47.096229802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:47.097089 containerd[1474]: time="2025-09-13T00:08:47.097042727Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 13 00:08:47.102072 containerd[1474]: time="2025-09-13T00:08:47.102026623Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:47.104561 containerd[1474]: time="2025-09-13T00:08:47.104520801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:47.105492 containerd[1474]: time="2025-09-13T00:08:47.105451196Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.435144225s" Sep 13 00:08:47.105536 containerd[1474]: time="2025-09-13T00:08:47.105490379Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:08:47.106163 containerd[1474]: time="2025-09-13T00:08:47.106094462Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:08:48.206437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2969073971.mount: Deactivated successfully. 
Sep 13 00:08:49.706524 containerd[1474]: time="2025-09-13T00:08:49.706452301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:49.707278 containerd[1474]: time="2025-09-13T00:08:49.707171450Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 13 00:08:49.712775 containerd[1474]: time="2025-09-13T00:08:49.712638462Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:49.716682 containerd[1474]: time="2025-09-13T00:08:49.716611533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:49.717908 containerd[1474]: time="2025-09-13T00:08:49.717827874Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.611686023s" Sep 13 00:08:49.717908 containerd[1474]: time="2025-09-13T00:08:49.717904338Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:08:49.718797 containerd[1474]: time="2025-09-13T00:08:49.718559276Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:08:51.104694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4028454348.mount: Deactivated successfully. 
Sep 13 00:08:51.114643 containerd[1474]: time="2025-09-13T00:08:51.114577406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:51.115544 containerd[1474]: time="2025-09-13T00:08:51.115477538Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 00:08:51.116791 containerd[1474]: time="2025-09-13T00:08:51.116752250Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:51.119716 containerd[1474]: time="2025-09-13T00:08:51.119671176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:51.120539 containerd[1474]: time="2025-09-13T00:08:51.120478659Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.401850163s" Sep 13 00:08:51.120539 containerd[1474]: time="2025-09-13T00:08:51.120519879Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:08:51.121197 containerd[1474]: time="2025-09-13T00:08:51.121161243Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:08:51.815247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount644612414.mount: Deactivated successfully. Sep 13 00:08:54.120808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 00:08:54.132264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:08:54.304008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:08:54.309206 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:08:54.846531 kubelet[2014]: E0913 00:08:54.846472 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:08:54.851056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:08:54.851311 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
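Note: the pull series above, together with the etcd pull that completes just below, is the full control-plane image set for a v1.31 cluster (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd). Assuming kubeadm is what drives this bootstrap, the same set can be pre-fetched in one command:

    kubeadm config images pull --kubernetes-version v1.31.13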
Sep 13 00:08:57.579017 containerd[1474]: time="2025-09-13T00:08:57.578905340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:57.580146 containerd[1474]: time="2025-09-13T00:08:57.580093578Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 13 00:08:57.581480 containerd[1474]: time="2025-09-13T00:08:57.581437462Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:57.585249 containerd[1474]: time="2025-09-13T00:08:57.585160106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:57.586580 containerd[1474]: time="2025-09-13T00:08:57.586523147Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 6.465326597s" Sep 13 00:08:57.586580 containerd[1474]: time="2025-09-13T00:08:57.586561461Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:08:59.708335 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:08:59.722224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:08:59.755145 systemd[1]: Reloading requested from client PID 2090 ('systemctl') (unit session-9.scope)... Sep 13 00:08:59.755173 systemd[1]: Reloading... Sep 13 00:08:59.876080 zram_generator::config[2133]: No configuration found. Sep 13 00:09:00.116946 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:09:00.215470 systemd[1]: Reloading finished in 459 ms. Sep 13 00:09:00.265890 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:09:00.266034 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:09:00.266365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:09:00.282325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:09:00.451472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:09:00.456209 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:09:00.692705 kubelet[2177]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:09:00.692705 kubelet[2177]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
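Note: the "Reloading requested from client PID 2090 ('systemctl') (unit session-9.scope)" / "Reloading finished in 459 ms" pair above is a daemon reload issued from session 9, plausibly by the install.sh run earlier under sudo (an assumption; the log does not show its command line). After it, kubelet starts with a real configuration and only warns about deprecated flags instead of exiting. The likely sequence, sketched:

    systemctl daemon-reload      # what PID 2090 most plausibly ran
    systemctl restart kubelet    # picks up the freshly written config.yaml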
Sep 13 00:09:00.692705 kubelet[2177]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:09:00.693446 kubelet[2177]: I0913 00:09:00.692753 2177 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:09:01.060902 kubelet[2177]: I0913 00:09:01.060848 2177 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:09:01.060902 kubelet[2177]: I0913 00:09:01.060887 2177 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:09:01.061414 kubelet[2177]: I0913 00:09:01.061374 2177 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:09:01.082217 kubelet[2177]: E0913 00:09:01.082183 2177 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:09:01.083942 kubelet[2177]: I0913 00:09:01.083881 2177 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:09:01.091101 kubelet[2177]: E0913 00:09:01.091060 2177 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:09:01.091101 kubelet[2177]: I0913 00:09:01.091094 2177 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:09:01.097835 kubelet[2177]: I0913 00:09:01.097778 2177 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:09:01.097946 kubelet[2177]: I0913 00:09:01.097925 2177 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:09:01.098148 kubelet[2177]: I0913 00:09:01.098096 2177 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:09:01.098340 kubelet[2177]: I0913 00:09:01.098133 2177 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:09:01.098526 kubelet[2177]: I0913 00:09:01.098347 2177 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:09:01.098526 kubelet[2177]: I0913 00:09:01.098359 2177 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:09:01.098526 kubelet[2177]: I0913 00:09:01.098508 2177 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:09:01.101143 kubelet[2177]: I0913 00:09:01.101095 2177 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:09:01.101143 kubelet[2177]: I0913 00:09:01.101127 2177 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:09:01.101328 kubelet[2177]: I0913 00:09:01.101166 2177 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:09:01.101328 kubelet[2177]: I0913 00:09:01.101200 2177 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:09:01.109501 kubelet[2177]: I0913 00:09:01.109466 2177 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:09:01.110474 kubelet[2177]: I0913 00:09:01.110141 2177 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:09:01.110474 kubelet[2177]: W0913 00:09:01.110206 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.89:6443: connect: connection refused Sep 13 00:09:01.110474 kubelet[2177]: E0913 00:09:01.110299 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:09:01.110811 kubelet[2177]: W0913 00:09:01.110780 2177 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:09:01.111164 kubelet[2177]: W0913 00:09:01.111117 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Sep 13 00:09:01.111312 kubelet[2177]: E0913 00:09:01.111286 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:09:01.113362 kubelet[2177]: I0913 00:09:01.113330 2177 server.go:1274] "Started kubelet" Sep 13 00:09:01.115231 kubelet[2177]: I0913 00:09:01.115018 2177 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:09:01.116733 kubelet[2177]: I0913 00:09:01.116706 2177 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:09:01.117854 kubelet[2177]: I0913 00:09:01.117830 2177 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:09:01.119867 kubelet[2177]: I0913 00:09:01.118473 2177 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:09:01.119867 kubelet[2177]: I0913 00:09:01.118455 2177 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:09:01.119867 kubelet[2177]: I0913 00:09:01.118829 2177 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:09:01.120435 kubelet[2177]: I0913 00:09:01.120394 2177 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:09:01.120591 kubelet[2177]: E0913 00:09:01.120519 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:09:01.121129 kubelet[2177]: I0913 00:09:01.121103 2177 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:09:01.121182 kubelet[2177]: I0913 00:09:01.121158 2177 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:09:01.122204 kubelet[2177]: E0913 00:09:01.122124 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="200ms" Sep 13 00:09:01.122284 kubelet[2177]: I0913 00:09:01.122211 2177 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:09:01.123310 kubelet[2177]: W0913 00:09:01.123213 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Sep 13 00:09:01.123370 kubelet[2177]: E0913 00:09:01.123315 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:09:01.123625 kubelet[2177]: I0913 00:09:01.123594 2177 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:09:01.124871 kubelet[2177]: E0913 00:09:01.122191 2177 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864aef97ae5cb24 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:09:01.113305892 +0000 UTC m=+0.648975241,LastTimestamp:2025-09-13 00:09:01.113305892 +0000 UTC m=+0.648975241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:09:01.125419 kubelet[2177]: I0913 00:09:01.125396 2177 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:09:01.127148 kubelet[2177]: E0913 00:09:01.127083 2177 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:09:01.142160 kubelet[2177]: I0913 00:09:01.142106 2177 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:09:01.143889 kubelet[2177]: I0913 00:09:01.143869 2177 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:09:01.143971 kubelet[2177]: I0913 00:09:01.143905 2177 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:09:01.143971 kubelet[2177]: I0913 00:09:01.143953 2177 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:09:01.144091 kubelet[2177]: E0913 00:09:01.144062 2177 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:09:01.145471 kubelet[2177]: I0913 00:09:01.145353 2177 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:09:01.145471 kubelet[2177]: I0913 00:09:01.145377 2177 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:09:01.145471 kubelet[2177]: I0913 00:09:01.145398 2177 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:09:01.147800 kubelet[2177]: W0913 00:09:01.147343 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Sep 13 00:09:01.147800 kubelet[2177]: E0913 00:09:01.147386 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:09:01.221492 kubelet[2177]: E0913 00:09:01.221425 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:09:01.244845 kubelet[2177]: E0913 00:09:01.244780 2177 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:09:01.322259 kubelet[2177]: E0913 00:09:01.322079 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:09:01.322719 kubelet[2177]: E0913 00:09:01.322669 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="400ms" Sep 13 00:09:01.423331 kubelet[2177]: E0913 00:09:01.423252 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:09:01.445459 kubelet[2177]: E0913 00:09:01.445401 2177 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:09:01.524356 kubelet[2177]: E0913 00:09:01.524310 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:09:01.624708 kubelet[2177]: E0913 00:09:01.624652 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:09:01.723830 kubelet[2177]: E0913 00:09:01.723759 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="800ms" Sep 13 00:09:01.724758 kubelet[2177]: E0913 00:09:01.724726 2177 kubelet_node_status.go:453] "Error getting the current node from lister" 
err="node \"localhost\" not found" Sep 13 00:09:01.825421 kubelet[2177]: E0913 00:09:01.825343 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:09:01.845595 kubelet[2177]: E0913 00:09:01.845526 2177 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:09:01.926339 kubelet[2177]: E0913 00:09:01.926158 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:09:01.947880 kubelet[2177]: W0913 00:09:01.947804 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Sep 13 00:09:01.947880 kubelet[2177]: E0913 00:09:01.947868 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:09:01.984809 kubelet[2177]: I0913 00:09:01.984716 2177 policy_none.go:49] "None policy: Start" Sep 13 00:09:01.985707 kubelet[2177]: I0913 00:09:01.985653 2177 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:09:01.985707 kubelet[2177]: I0913 00:09:01.985710 2177 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:09:02.027194 kubelet[2177]: E0913 00:09:02.027154 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:09:02.103226 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:09:02.118522 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:09:02.122402 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 13 00:09:02.128171 kubelet[2177]: E0913 00:09:02.128134 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:09:02.129944 kubelet[2177]: I0913 00:09:02.129867 2177 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:09:02.130232 kubelet[2177]: I0913 00:09:02.130198 2177 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:09:02.130294 kubelet[2177]: I0913 00:09:02.130222 2177 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:09:02.130563 kubelet[2177]: I0913 00:09:02.130523 2177 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:09:02.131474 kubelet[2177]: E0913 00:09:02.131451 2177 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:09:02.233328 kubelet[2177]: I0913 00:09:02.233143 2177 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:09:02.233687 kubelet[2177]: E0913 00:09:02.233625 2177 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Sep 13 00:09:02.435449 kubelet[2177]: I0913 00:09:02.435399 2177 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:09:02.435841 kubelet[2177]: E0913 00:09:02.435806 2177 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Sep 13 00:09:02.451613 kubelet[2177]: W0913 00:09:02.451566 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Sep 13 00:09:02.451778 kubelet[2177]: E0913 00:09:02.451619 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:09:02.524818 kubelet[2177]: E0913 00:09:02.524669 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="1.6s" Sep 13 00:09:02.644754 kubelet[2177]: W0913 00:09:02.644654 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Sep 13 00:09:02.644754 kubelet[2177]: E0913 00:09:02.644731 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:09:02.655923 systemd[1]: Created slice 
kubepods-burstable-pod343a74e7caf5968aff257928f3a18690.slice - libcontainer container kubepods-burstable-pod343a74e7caf5968aff257928f3a18690.slice. Sep 13 00:09:02.679601 kubelet[2177]: W0913 00:09:02.679487 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Sep 13 00:09:02.679601 kubelet[2177]: E0913 00:09:02.679552 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:09:02.681198 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 13 00:09:02.692691 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 13 00:09:02.732352 kubelet[2177]: I0913 00:09:02.732251 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:09:02.732352 kubelet[2177]: I0913 00:09:02.732327 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:09:02.732352 kubelet[2177]: I0913 00:09:02.732357 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/343a74e7caf5968aff257928f3a18690-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"343a74e7caf5968aff257928f3a18690\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:09:02.732352 kubelet[2177]: I0913 00:09:02.732378 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/343a74e7caf5968aff257928f3a18690-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"343a74e7caf5968aff257928f3a18690\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:09:02.732976 kubelet[2177]: I0913 00:09:02.732401 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/343a74e7caf5968aff257928f3a18690-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"343a74e7caf5968aff257928f3a18690\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:09:02.732976 kubelet[2177]: I0913 00:09:02.732428 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:09:02.732976 kubelet[2177]: I0913 00:09:02.732449 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:09:02.732976 kubelet[2177]: I0913 00:09:02.732481 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:09:02.732976 kubelet[2177]: I0913 00:09:02.732556 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:09:02.837942 kubelet[2177]: I0913 00:09:02.837772 2177 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:09:02.838337 kubelet[2177]: E0913 00:09:02.838280 2177 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Sep 13 00:09:02.978259 kubelet[2177]: E0913 00:09:02.978206 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:02.979133 containerd[1474]: time="2025-09-13T00:09:02.979080414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:343a74e7caf5968aff257928f3a18690,Namespace:kube-system,Attempt:0,}" Sep 13 00:09:02.990448 kubelet[2177]: E0913 00:09:02.990388 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:02.990955 containerd[1474]: time="2025-09-13T00:09:02.990911173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 13 00:09:02.995307 kubelet[2177]: E0913 00:09:02.995274 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:02.995821 containerd[1474]: time="2025-09-13T00:09:02.995775823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 13 00:09:03.191592 kubelet[2177]: E0913 00:09:03.191517 2177 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Sep 13 
00:09:03.640383 kubelet[2177]: I0913 00:09:03.640335 2177 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:09:03.640733 kubelet[2177]: E0913 00:09:03.640704 2177 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Sep 13 00:09:03.668016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1213325148.mount: Deactivated successfully. Sep 13 00:09:03.675659 containerd[1474]: time="2025-09-13T00:09:03.675585623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:09:03.676822 containerd[1474]: time="2025-09-13T00:09:03.676775261Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:09:03.677623 containerd[1474]: time="2025-09-13T00:09:03.677598895Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:09:03.678485 containerd[1474]: time="2025-09-13T00:09:03.678446013Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:09:03.679588 containerd[1474]: time="2025-09-13T00:09:03.679524589Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:09:03.680571 containerd[1474]: time="2025-09-13T00:09:03.680496824Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 13 00:09:03.681658 containerd[1474]: time="2025-09-13T00:09:03.681613213Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:09:03.686435 containerd[1474]: time="2025-09-13T00:09:03.686366303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:09:03.687466 containerd[1474]: time="2025-09-13T00:09:03.687412318Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 708.242244ms" Sep 13 00:09:03.689244 containerd[1474]: time="2025-09-13T00:09:03.689197426Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 693.333897ms" Sep 13 00:09:03.690627 containerd[1474]: time="2025-09-13T00:09:03.690598594Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 699.608531ms" Sep 13 00:09:03.898777 containerd[1474]: time="2025-09-13T00:09:03.898596564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:09:03.899720 containerd[1474]: time="2025-09-13T00:09:03.899581523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:09:03.899830 containerd[1474]: time="2025-09-13T00:09:03.899803434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:03.900154 containerd[1474]: time="2025-09-13T00:09:03.900124393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:03.902414 containerd[1474]: time="2025-09-13T00:09:03.901222137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:09:03.902414 containerd[1474]: time="2025-09-13T00:09:03.901277562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:09:03.902414 containerd[1474]: time="2025-09-13T00:09:03.901289404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:03.902414 containerd[1474]: time="2025-09-13T00:09:03.901364647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:03.903117 containerd[1474]: time="2025-09-13T00:09:03.903026220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:09:03.903117 containerd[1474]: time="2025-09-13T00:09:03.903089650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:09:03.903228 containerd[1474]: time="2025-09-13T00:09:03.903104359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:03.903228 containerd[1474]: time="2025-09-13T00:09:03.903189350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:03.963953 update_engine[1451]: I20250913 00:09:03.963828 1451 update_attempter.cc:509] Updating boot flags... Sep 13 00:09:03.969361 systemd[1]: Started cri-containerd-58f40880c98f73deeda272d8125f1820f794bf8e0fb49e8cc1baa4a2c7b3b3ff.scope - libcontainer container 58f40880c98f73deeda272d8125f1820f794bf8e0fb49e8cc1baa4a2c7b3b3ff. Sep 13 00:09:03.976049 systemd[1]: Started cri-containerd-dd5938782ef9822d5c142bc50a6076662ebbbee6729187d2e4a5f4b3606565a7.scope - libcontainer container dd5938782ef9822d5c142bc50a6076662ebbbee6729187d2e4a5f4b3606565a7. 
Sep 13 00:09:03.995890 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2307) Sep 13 00:09:03.995205 systemd[1]: Started cri-containerd-577a5c763e631928a12e24e25f12e1f0fa608f293498c6c3fa16536c275b4560.scope - libcontainer container 577a5c763e631928a12e24e25f12e1f0fa608f293498c6c3fa16536c275b4560. Sep 13 00:09:04.120095 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2310) Sep 13 00:09:04.125590 kubelet[2177]: E0913 00:09:04.125541 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="3.2s" Sep 13 00:09:04.151594 containerd[1474]: time="2025-09-13T00:09:04.151465256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"577a5c763e631928a12e24e25f12e1f0fa608f293498c6c3fa16536c275b4560\"" Sep 13 00:09:04.155931 containerd[1474]: time="2025-09-13T00:09:04.155889396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:343a74e7caf5968aff257928f3a18690,Namespace:kube-system,Attempt:0,} returns sandbox id \"58f40880c98f73deeda272d8125f1820f794bf8e0fb49e8cc1baa4a2c7b3b3ff\"" Sep 13 00:09:04.158843 kubelet[2177]: E0913 00:09:04.158564 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:04.159406 kubelet[2177]: E0913 00:09:04.159204 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:04.160683 containerd[1474]: time="2025-09-13T00:09:04.160627631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd5938782ef9822d5c142bc50a6076662ebbbee6729187d2e4a5f4b3606565a7\"" Sep 13 00:09:04.163433 kubelet[2177]: E0913 00:09:04.163367 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:04.165527 containerd[1474]: time="2025-09-13T00:09:04.165493398Z" level=info msg="CreateContainer within sandbox \"58f40880c98f73deeda272d8125f1820f794bf8e0fb49e8cc1baa4a2c7b3b3ff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:09:04.165585 containerd[1474]: time="2025-09-13T00:09:04.165559032Z" level=info msg="CreateContainer within sandbox \"dd5938782ef9822d5c142bc50a6076662ebbbee6729187d2e4a5f4b3606565a7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:09:04.167010 containerd[1474]: time="2025-09-13T00:09:04.165730728Z" level=info msg="CreateContainer within sandbox \"577a5c763e631928a12e24e25f12e1f0fa608f293498c6c3fa16536c275b4560\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:09:04.172075 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2310) Sep 13 00:09:04.272575 containerd[1474]: time="2025-09-13T00:09:04.272503858Z" level=info msg="CreateContainer within sandbox 
\"dd5938782ef9822d5c142bc50a6076662ebbbee6729187d2e4a5f4b3606565a7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"efe9a656e4f2d98179f7daf373710e318ade314ec3f2601b346a3a04ec27a1b6\"" Sep 13 00:09:04.273325 containerd[1474]: time="2025-09-13T00:09:04.273299687Z" level=info msg="StartContainer for \"efe9a656e4f2d98179f7daf373710e318ade314ec3f2601b346a3a04ec27a1b6\"" Sep 13 00:09:04.276411 containerd[1474]: time="2025-09-13T00:09:04.276364889Z" level=info msg="CreateContainer within sandbox \"577a5c763e631928a12e24e25f12e1f0fa608f293498c6c3fa16536c275b4560\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1e850c19519ceb33a02d8d39bc3ac50f0a1719ab1d15e288312aaa1896b04dc4\"" Sep 13 00:09:04.276736 containerd[1474]: time="2025-09-13T00:09:04.276714844Z" level=info msg="StartContainer for \"1e850c19519ceb33a02d8d39bc3ac50f0a1719ab1d15e288312aaa1896b04dc4\"" Sep 13 00:09:04.277165 containerd[1474]: time="2025-09-13T00:09:04.277134269Z" level=info msg="CreateContainer within sandbox \"58f40880c98f73deeda272d8125f1820f794bf8e0fb49e8cc1baa4a2c7b3b3ff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"54da79cac224905945b028ef802d14586b2d8ea02414a260469184e30cae05e1\"" Sep 13 00:09:04.277495 containerd[1474]: time="2025-09-13T00:09:04.277469996Z" level=info msg="StartContainer for \"54da79cac224905945b028ef802d14586b2d8ea02414a260469184e30cae05e1\"" Sep 13 00:09:04.318139 systemd[1]: Started cri-containerd-1e850c19519ceb33a02d8d39bc3ac50f0a1719ab1d15e288312aaa1896b04dc4.scope - libcontainer container 1e850c19519ceb33a02d8d39bc3ac50f0a1719ab1d15e288312aaa1896b04dc4. Sep 13 00:09:04.319743 systemd[1]: Started cri-containerd-54da79cac224905945b028ef802d14586b2d8ea02414a260469184e30cae05e1.scope - libcontainer container 54da79cac224905945b028ef802d14586b2d8ea02414a260469184e30cae05e1. Sep 13 00:09:04.321776 systemd[1]: Started cri-containerd-efe9a656e4f2d98179f7daf373710e318ade314ec3f2601b346a3a04ec27a1b6.scope - libcontainer container efe9a656e4f2d98179f7daf373710e318ade314ec3f2601b346a3a04ec27a1b6. 
Sep 13 00:09:04.382276 containerd[1474]: time="2025-09-13T00:09:04.382191505Z" level=info msg="StartContainer for \"efe9a656e4f2d98179f7daf373710e318ade314ec3f2601b346a3a04ec27a1b6\" returns successfully" Sep 13 00:09:04.382690 containerd[1474]: time="2025-09-13T00:09:04.382208046Z" level=info msg="StartContainer for \"1e850c19519ceb33a02d8d39bc3ac50f0a1719ab1d15e288312aaa1896b04dc4\" returns successfully" Sep 13 00:09:04.382690 containerd[1474]: time="2025-09-13T00:09:04.382218095Z" level=info msg="StartContainer for \"54da79cac224905945b028ef802d14586b2d8ea02414a260469184e30cae05e1\" returns successfully" Sep 13 00:09:05.167550 kubelet[2177]: E0913 00:09:05.167499 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:05.169463 kubelet[2177]: E0913 00:09:05.169429 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:05.171432 kubelet[2177]: E0913 00:09:05.171385 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:05.244696 kubelet[2177]: I0913 00:09:05.244642 2177 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:09:05.654947 kubelet[2177]: I0913 00:09:05.654893 2177 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:09:06.106850 kubelet[2177]: I0913 00:09:06.106691 2177 apiserver.go:52] "Watching apiserver" Sep 13 00:09:06.122197 kubelet[2177]: I0913 00:09:06.122141 2177 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:09:06.178076 kubelet[2177]: E0913 00:09:06.178020 2177 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 13 00:09:06.178613 kubelet[2177]: E0913 00:09:06.178145 2177 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:09:06.178613 kubelet[2177]: E0913 00:09:06.178201 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:06.178613 kubelet[2177]: E0913 00:09:06.178402 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:07.181940 kubelet[2177]: E0913 00:09:07.181883 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:08.175504 kubelet[2177]: E0913 00:09:08.175453 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:08.398106 systemd[1]: Reloading requested from client PID 2469 ('systemctl') (unit session-9.scope)... Sep 13 00:09:08.398124 systemd[1]: Reloading... 
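Both daemon reloads in this log (the one at 00:09:00 above, and this one — the warning repeats just below) trip the same compatibility shim: docker.socket still says ListenStream=/var/run/docker.sock, and systemd rewrites it to /run/docker.sock on the fly while asking for the unit to be fixed. The permanent fix is a one-line change in the socket unit; the surrounding stanza is shown as it commonly ships, not copied from this host:

    # /etc/systemd/system/docker.socket (edited copy or drop-in)
    [Socket]
    ListenStream=/run/docker.sock   # was /var/run/docker.sock; /var/run is a symlink to /run
    SocketMode=0660
    SocketUser=root
    SocketGroup=docker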
Sep 13 00:09:08.486052 zram_generator::config[2514]: No configuration found. Sep 13 00:09:08.613782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:09:08.723608 systemd[1]: Reloading finished in 325 ms. Sep 13 00:09:08.768930 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:09:08.793590 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:09:08.793913 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:09:08.793980 systemd[1]: kubelet.service: Consumed 1.292s CPU time, 136.2M memory peak, 0B memory swap peak. Sep 13 00:09:08.800358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:09:08.978029 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:09:08.983674 (kubelet)[2553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:09:09.025873 kubelet[2553]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:09:09.025873 kubelet[2553]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:09:09.025873 kubelet[2553]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:09:09.025873 kubelet[2553]: I0913 00:09:09.025750 2553 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:09:09.033838 kubelet[2553]: I0913 00:09:09.033797 2553 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:09:09.033838 kubelet[2553]: I0913 00:09:09.033823 2553 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:09:09.034113 kubelet[2553]: I0913 00:09:09.034089 2553 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:09:09.035655 kubelet[2553]: I0913 00:09:09.035626 2553 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:09:09.037441 kubelet[2553]: I0913 00:09:09.037411 2553 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:09:09.042387 kubelet[2553]: E0913 00:09:09.042293 2553 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:09:09.042387 kubelet[2553]: I0913 00:09:09.042374 2553 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:09:09.049076 kubelet[2553]: I0913 00:09:09.048929 2553 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:09:09.049231 kubelet[2553]: I0913 00:09:09.049140 2553 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:09:09.049295 kubelet[2553]: I0913 00:09:09.049262 2553 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:09:09.049467 kubelet[2553]: I0913 00:09:09.049285 2553 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:09:09.049467 kubelet[2553]: I0913 00:09:09.049467 2553 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:09:09.049629 kubelet[2553]: I0913 00:09:09.049476 2553 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:09:09.049629 kubelet[2553]: I0913 00:09:09.049509 2553 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:09:09.049691 kubelet[2553]: I0913 00:09:09.049656 2553 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:09:09.049691 kubelet[2553]: I0913 00:09:09.049671 2553 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:09:09.050009 kubelet[2553]: I0913 00:09:09.049966 2553 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:09:09.050009 kubelet[2553]: I0913 00:09:09.049987 2553 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:09:09.053018 kubelet[2553]: I0913 00:09:09.051801 2553 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:09:09.053018 kubelet[2553]: I0913 00:09:09.052310 2553 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:09:09.053018 kubelet[2553]: I0913 00:09:09.052798 2553 server.go:1274] "Started kubelet" Sep 13 00:09:09.053270 kubelet[2553]: I0913 00:09:09.053242 2553 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:09:09.053398 kubelet[2553]: I0913 
00:09:09.053376 2553 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:09:09.053935 kubelet[2553]: I0913 00:09:09.053905 2553 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:09:09.054727 kubelet[2553]: I0913 00:09:09.054668 2553 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:09:09.056576 kubelet[2553]: I0913 00:09:09.056561 2553 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:09:09.056680 kubelet[2553]: I0913 00:09:09.056668 2553 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:09:09.056988 kubelet[2553]: I0913 00:09:09.056967 2553 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:09:09.057567 kubelet[2553]: I0913 00:09:09.057552 2553 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:09:09.057746 kubelet[2553]: I0913 00:09:09.057734 2553 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:09:09.064619 kubelet[2553]: E0913 00:09:09.062597 2553 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:09:09.068250 kubelet[2553]: I0913 00:09:09.068189 2553 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:09:09.068334 kubelet[2553]: I0913 00:09:09.068306 2553 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:09:09.072041 kubelet[2553]: I0913 00:09:09.072017 2553 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:09:09.082073 kubelet[2553]: I0913 00:09:09.081986 2553 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:09:09.083473 kubelet[2553]: I0913 00:09:09.083448 2553 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:09:09.083473 kubelet[2553]: I0913 00:09:09.083474 2553 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:09:09.083559 kubelet[2553]: I0913 00:09:09.083498 2553 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:09:09.083610 kubelet[2553]: E0913 00:09:09.083560 2553 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:09:09.117033 kubelet[2553]: I0913 00:09:09.116980 2553 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:09:09.117033 kubelet[2553]: I0913 00:09:09.117017 2553 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:09:09.117033 kubelet[2553]: I0913 00:09:09.117040 2553 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:09:09.117258 kubelet[2553]: I0913 00:09:09.117196 2553 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:09:09.117258 kubelet[2553]: I0913 00:09:09.117206 2553 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:09:09.117258 kubelet[2553]: I0913 00:09:09.117227 2553 policy_none.go:49] "None policy: Start" Sep 13 00:09:09.118176 kubelet[2553]: I0913 00:09:09.118076 2553 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:09:09.118176 kubelet[2553]: I0913 00:09:09.118114 2553 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:09:09.118272 kubelet[2553]: I0913 00:09:09.118261 2553 state_mem.go:75] "Updated machine memory state" Sep 13 00:09:09.124147 kubelet[2553]: I0913 00:09:09.124099 2553 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:09:09.124388 kubelet[2553]: I0913 00:09:09.124361 2553 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:09:09.124441 kubelet[2553]: I0913 00:09:09.124387 2553 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:09:09.125724 kubelet[2553]: I0913 00:09:09.125692 2553 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:09:09.191864 kubelet[2553]: E0913 00:09:09.191798 2553 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:09:09.230675 kubelet[2553]: I0913 00:09:09.230512 2553 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:09:09.240263 kubelet[2553]: I0913 00:09:09.240213 2553 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 13 00:09:09.240427 kubelet[2553]: I0913 00:09:09.240330 2553 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:09:09.258878 kubelet[2553]: I0913 00:09:09.258818 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/343a74e7caf5968aff257928f3a18690-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"343a74e7caf5968aff257928f3a18690\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:09:09.259076 kubelet[2553]: I0913 00:09:09.258890 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:09:09.259076 kubelet[2553]: I0913 00:09:09.258922 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:09:09.259076 kubelet[2553]: I0913 00:09:09.258947 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/343a74e7caf5968aff257928f3a18690-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"343a74e7caf5968aff257928f3a18690\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:09:09.259076 kubelet[2553]: I0913 00:09:09.258967 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:09:09.259076 kubelet[2553]: I0913 00:09:09.258984 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:09:09.259225 kubelet[2553]: I0913 00:09:09.259021 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:09:09.259225 kubelet[2553]: I0913 00:09:09.259042 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:09:09.259225 kubelet[2553]: I0913 00:09:09.259063 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/343a74e7caf5968aff257928f3a18690-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"343a74e7caf5968aff257928f3a18690\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:09:09.490908 kubelet[2553]: E0913 00:09:09.490752 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:09.491791 kubelet[2553]: E0913 00:09:09.491768 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:09.493161 kubelet[2553]: E0913 00:09:09.493137 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:10.051809 kubelet[2553]: I0913 00:09:10.051754 2553 apiserver.go:52] "Watching apiserver" Sep 13 00:09:10.058302 kubelet[2553]: I0913 00:09:10.058263 2553 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:09:10.097232 kubelet[2553]: E0913 00:09:10.097174 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:10.097793 kubelet[2553]: E0913 00:09:10.097515 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:10.115489 kubelet[2553]: E0913 00:09:10.115434 2553 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:09:10.115687 kubelet[2553]: E0913 00:09:10.115667 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:10.116792 kubelet[2553]: I0913 00:09:10.116727 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.116713653 podStartE2EDuration="1.116713653s" podCreationTimestamp="2025-09-13 00:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:09:10.116456447 +0000 UTC m=+1.128617858" watchObservedRunningTime="2025-09-13 00:09:10.116713653 +0000 UTC m=+1.128875064" Sep 13 00:09:10.134133 kubelet[2553]: I0913 00:09:10.134056 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.134033791 podStartE2EDuration="1.134033791s" podCreationTimestamp="2025-09-13 00:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:09:10.133756217 +0000 UTC m=+1.145917628" watchObservedRunningTime="2025-09-13 00:09:10.134033791 +0000 UTC m=+1.146195212" Sep 13 00:09:10.172687 kubelet[2553]: I0913 00:09:10.168925 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.168899905 podStartE2EDuration="3.168899905s" podCreationTimestamp="2025-09-13 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:09:10.15098899 +0000 UTC m=+1.163150401" watchObservedRunningTime="2025-09-13 00:09:10.168899905 +0000 UTC m=+1.181061316" Sep 13 00:09:11.099582 kubelet[2553]: E0913 00:09:11.099529 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:12.271052 kubelet[2553]: E0913 00:09:12.270989 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:13.270114 kubelet[2553]: I0913 00:09:13.270076 2553 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 
00:09:13.270488 containerd[1474]: time="2025-09-13T00:09:13.270443330Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:09:13.270837 kubelet[2553]: I0913 00:09:13.270766 2553 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:09:14.292504 systemd[1]: Created slice kubepods-besteffort-podc582a1b3_bc2d_4686_b9d8_d59db7a18d1e.slice - libcontainer container kubepods-besteffort-podc582a1b3_bc2d_4686_b9d8_d59db7a18d1e.slice. Sep 13 00:09:14.364247 systemd[1]: Created slice kubepods-besteffort-pod6e5b11ba_4d0d_4e3c_a715_4626c8759548.slice - libcontainer container kubepods-besteffort-pod6e5b11ba_4d0d_4e3c_a715_4626c8759548.slice. Sep 13 00:09:14.418936 kubelet[2553]: I0913 00:09:14.418860 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d9qg\" (UniqueName: \"kubernetes.io/projected/c582a1b3-bc2d-4686-b9d8-d59db7a18d1e-kube-api-access-9d9qg\") pod \"kube-proxy-s6vkh\" (UID: \"c582a1b3-bc2d-4686-b9d8-d59db7a18d1e\") " pod="kube-system/kube-proxy-s6vkh" Sep 13 00:09:14.418936 kubelet[2553]: I0913 00:09:14.418911 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c582a1b3-bc2d-4686-b9d8-d59db7a18d1e-kube-proxy\") pod \"kube-proxy-s6vkh\" (UID: \"c582a1b3-bc2d-4686-b9d8-d59db7a18d1e\") " pod="kube-system/kube-proxy-s6vkh" Sep 13 00:09:14.418936 kubelet[2553]: I0913 00:09:14.418934 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c582a1b3-bc2d-4686-b9d8-d59db7a18d1e-xtables-lock\") pod \"kube-proxy-s6vkh\" (UID: \"c582a1b3-bc2d-4686-b9d8-d59db7a18d1e\") " pod="kube-system/kube-proxy-s6vkh" Sep 13 00:09:14.418936 kubelet[2553]: I0913 00:09:14.418951 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c582a1b3-bc2d-4686-b9d8-d59db7a18d1e-lib-modules\") pod \"kube-proxy-s6vkh\" (UID: \"c582a1b3-bc2d-4686-b9d8-d59db7a18d1e\") " pod="kube-system/kube-proxy-s6vkh" Sep 13 00:09:14.519479 kubelet[2553]: I0913 00:09:14.519258 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd8gs\" (UniqueName: \"kubernetes.io/projected/6e5b11ba-4d0d-4e3c-a715-4626c8759548-kube-api-access-qd8gs\") pod \"tigera-operator-58fc44c59b-swnwz\" (UID: \"6e5b11ba-4d0d-4e3c-a715-4626c8759548\") " pod="tigera-operator/tigera-operator-58fc44c59b-swnwz" Sep 13 00:09:14.519479 kubelet[2553]: I0913 00:09:14.519324 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6e5b11ba-4d0d-4e3c-a715-4626c8759548-var-lib-calico\") pod \"tigera-operator-58fc44c59b-swnwz\" (UID: \"6e5b11ba-4d0d-4e3c-a715-4626c8759548\") " pod="tigera-operator/tigera-operator-58fc44c59b-swnwz" Sep 13 00:09:14.610887 kubelet[2553]: E0913 00:09:14.610722 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:14.611674 containerd[1474]: time="2025-09-13T00:09:14.611594403Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-s6vkh,Uid:c582a1b3-bc2d-4686-b9d8-d59db7a18d1e,Namespace:kube-system,Attempt:0,}" Sep 13 00:09:14.648533 containerd[1474]: time="2025-09-13T00:09:14.648423416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:09:14.648533 containerd[1474]: time="2025-09-13T00:09:14.648485864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:09:14.648533 containerd[1474]: time="2025-09-13T00:09:14.648496534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:14.648779 containerd[1474]: time="2025-09-13T00:09:14.648604247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:14.667670 containerd[1474]: time="2025-09-13T00:09:14.667618655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-swnwz,Uid:6e5b11ba-4d0d-4e3c-a715-4626c8759548,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:09:14.684240 systemd[1]: Started cri-containerd-23a123ab1a3b275f3492e950211d0aa85c180663726ccfaf6e939783b99bf3fb.scope - libcontainer container 23a123ab1a3b275f3492e950211d0aa85c180663726ccfaf6e939783b99bf3fb. Sep 13 00:09:14.714614 containerd[1474]: time="2025-09-13T00:09:14.714555716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s6vkh,Uid:c582a1b3-bc2d-4686-b9d8-d59db7a18d1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"23a123ab1a3b275f3492e950211d0aa85c180663726ccfaf6e939783b99bf3fb\"" Sep 13 00:09:14.715522 kubelet[2553]: E0913 00:09:14.715489 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:14.718360 containerd[1474]: time="2025-09-13T00:09:14.718178119Z" level=info msg="CreateContainer within sandbox \"23a123ab1a3b275f3492e950211d0aa85c180663726ccfaf6e939783b99bf3fb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:09:14.985800 containerd[1474]: time="2025-09-13T00:09:14.985708039Z" level=info msg="CreateContainer within sandbox \"23a123ab1a3b275f3492e950211d0aa85c180663726ccfaf6e939783b99bf3fb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fec5a130bb834260ba65ef7c6f070d3182435767f7caa203a6cf1fa095d238c3\"" Sep 13 00:09:14.987308 containerd[1474]: time="2025-09-13T00:09:14.987278672Z" level=info msg="StartContainer for \"fec5a130bb834260ba65ef7c6f070d3182435767f7caa203a6cf1fa095d238c3\"" Sep 13 00:09:15.000367 containerd[1474]: time="2025-09-13T00:09:14.999572603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:09:15.000367 containerd[1474]: time="2025-09-13T00:09:15.000326544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:09:15.000367 containerd[1474]: time="2025-09-13T00:09:15.000342976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:15.000618 containerd[1474]: time="2025-09-13T00:09:15.000452933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:15.023195 systemd[1]: Started cri-containerd-b35d9117b77b4d491edb5c4492c430cb0f245345f83a5e605ac3158eb7e36f50.scope - libcontainer container b35d9117b77b4d491edb5c4492c430cb0f245345f83a5e605ac3158eb7e36f50. Sep 13 00:09:15.027116 systemd[1]: Started cri-containerd-fec5a130bb834260ba65ef7c6f070d3182435767f7caa203a6cf1fa095d238c3.scope - libcontainer container fec5a130bb834260ba65ef7c6f070d3182435767f7caa203a6cf1fa095d238c3. Sep 13 00:09:15.076362 containerd[1474]: time="2025-09-13T00:09:15.076290229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-swnwz,Uid:6e5b11ba-4d0d-4e3c-a715-4626c8759548,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b35d9117b77b4d491edb5c4492c430cb0f245345f83a5e605ac3158eb7e36f50\"" Sep 13 00:09:15.080008 containerd[1474]: time="2025-09-13T00:09:15.079942365Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:09:15.083114 containerd[1474]: time="2025-09-13T00:09:15.082807869Z" level=info msg="StartContainer for \"fec5a130bb834260ba65ef7c6f070d3182435767f7caa203a6cf1fa095d238c3\" returns successfully" Sep 13 00:09:15.108027 kubelet[2553]: E0913 00:09:15.107680 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:15.154607 kubelet[2553]: E0913 00:09:15.154558 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:15.168308 kubelet[2553]: I0913 00:09:15.168211 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s6vkh" podStartSLOduration=1.168187608 podStartE2EDuration="1.168187608s" podCreationTimestamp="2025-09-13 00:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:09:15.125707684 +0000 UTC m=+6.137869095" watchObservedRunningTime="2025-09-13 00:09:15.168187608 +0000 UTC m=+6.180349019" Sep 13 00:09:15.496449 kubelet[2553]: E0913 00:09:15.496413 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:16.109080 kubelet[2553]: E0913 00:09:16.109033 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:16.109080 kubelet[2553]: E0913 00:09:16.109060 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:18.789030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount723892783.mount: Deactivated successfully. 
Sep 13 00:09:21.246840 containerd[1474]: time="2025-09-13T00:09:21.246741598Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:21.251272 containerd[1474]: time="2025-09-13T00:09:21.251201514Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 00:09:21.252608 containerd[1474]: time="2025-09-13T00:09:21.252569268Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:21.254924 containerd[1474]: time="2025-09-13T00:09:21.254879857Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:21.255579 containerd[1474]: time="2025-09-13T00:09:21.255546463Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 6.175561767s" Sep 13 00:09:21.255579 containerd[1474]: time="2025-09-13T00:09:21.255575798Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:09:21.258257 containerd[1474]: time="2025-09-13T00:09:21.258203354Z" level=info msg="CreateContainer within sandbox \"b35d9117b77b4d491edb5c4492c430cb0f245345f83a5e605ac3158eb7e36f50\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:09:21.282647 containerd[1474]: time="2025-09-13T00:09:21.282582294Z" level=info msg="CreateContainer within sandbox \"b35d9117b77b4d491edb5c4492c430cb0f245345f83a5e605ac3158eb7e36f50\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e44e6229623af49a47c5f53a88f039c2cbede2e4c35b65334e0ed7889a3473bb\"" Sep 13 00:09:21.283016 containerd[1474]: time="2025-09-13T00:09:21.282953233Z" level=info msg="StartContainer for \"e44e6229623af49a47c5f53a88f039c2cbede2e4c35b65334e0ed7889a3473bb\"" Sep 13 00:09:21.318137 systemd[1]: Started cri-containerd-e44e6229623af49a47c5f53a88f039c2cbede2e4c35b65334e0ed7889a3473bb.scope - libcontainer container e44e6229623af49a47c5f53a88f039c2cbede2e4c35b65334e0ed7889a3473bb. 
Sep 13 00:09:21.751309 containerd[1474]: time="2025-09-13T00:09:21.751268302Z" level=info msg="StartContainer for \"e44e6229623af49a47c5f53a88f039c2cbede2e4c35b65334e0ed7889a3473bb\" returns successfully" Sep 13 00:09:22.275818 kubelet[2553]: E0913 00:09:22.275770 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:22.291046 kubelet[2553]: I0913 00:09:22.287644 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-swnwz" podStartSLOduration=2.110046261 podStartE2EDuration="8.287623234s" podCreationTimestamp="2025-09-13 00:09:14 +0000 UTC" firstStartedPulling="2025-09-13 00:09:15.07937733 +0000 UTC m=+6.091538741" lastFinishedPulling="2025-09-13 00:09:21.256954313 +0000 UTC m=+12.269115714" observedRunningTime="2025-09-13 00:09:22.153191042 +0000 UTC m=+13.165352463" watchObservedRunningTime="2025-09-13 00:09:22.287623234 +0000 UTC m=+13.299784645" Sep 13 00:09:28.060823 sudo[1661]: pam_unix(sudo:session): session closed for user root Sep 13 00:09:28.064428 sshd[1658]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:28.068255 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:09:28.069198 systemd[1]: sshd@8-10.0.0.89:22-10.0.0.1:45818.service: Deactivated successfully. Sep 13 00:09:28.074431 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:09:28.074718 systemd[1]: session-9.scope: Consumed 5.192s CPU time, 156.5M memory peak, 0B memory swap peak. Sep 13 00:09:28.079711 systemd-logind[1449]: Removed session 9. Sep 13 00:09:31.073286 systemd[1]: Created slice kubepods-besteffort-pod020a4b7d_6e2e_437c_b24a_f2796a9ebbbf.slice - libcontainer container kubepods-besteffort-pod020a4b7d_6e2e_437c_b24a_f2796a9ebbbf.slice. 
Sep 13 00:09:31.125989 kubelet[2553]: I0913 00:09:31.125905 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs57t\" (UniqueName: \"kubernetes.io/projected/020a4b7d-6e2e-437c-b24a-f2796a9ebbbf-kube-api-access-gs57t\") pod \"calico-typha-68b7b584fb-ncjmg\" (UID: \"020a4b7d-6e2e-437c-b24a-f2796a9ebbbf\") " pod="calico-system/calico-typha-68b7b584fb-ncjmg" Sep 13 00:09:31.126765 kubelet[2553]: I0913 00:09:31.125961 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/020a4b7d-6e2e-437c-b24a-f2796a9ebbbf-typha-certs\") pod \"calico-typha-68b7b584fb-ncjmg\" (UID: \"020a4b7d-6e2e-437c-b24a-f2796a9ebbbf\") " pod="calico-system/calico-typha-68b7b584fb-ncjmg" Sep 13 00:09:31.126765 kubelet[2553]: I0913 00:09:31.126091 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/020a4b7d-6e2e-437c-b24a-f2796a9ebbbf-tigera-ca-bundle\") pod \"calico-typha-68b7b584fb-ncjmg\" (UID: \"020a4b7d-6e2e-437c-b24a-f2796a9ebbbf\") " pod="calico-system/calico-typha-68b7b584fb-ncjmg" Sep 13 00:09:31.377481 kubelet[2553]: E0913 00:09:31.377433 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:31.378727 containerd[1474]: time="2025-09-13T00:09:31.378465980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68b7b584fb-ncjmg,Uid:020a4b7d-6e2e-437c-b24a-f2796a9ebbbf,Namespace:calico-system,Attempt:0,}" Sep 13 00:09:31.414936 containerd[1474]: time="2025-09-13T00:09:31.414737926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:09:31.415482 containerd[1474]: time="2025-09-13T00:09:31.415266039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:09:31.415482 containerd[1474]: time="2025-09-13T00:09:31.415386965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:31.416952 containerd[1474]: time="2025-09-13T00:09:31.416875904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:31.458262 systemd[1]: Started cri-containerd-4429e3b3d2c007c48e9563c1cc2b7e78e36ba9fe905854aad9c17385a0e4d202.scope - libcontainer container 4429e3b3d2c007c48e9563c1cc2b7e78e36ba9fe905854aad9c17385a0e4d202. Sep 13 00:09:31.482354 systemd[1]: Created slice kubepods-besteffort-pod52028fca_9cc9_4c38_8131_f9ab8053d3d2.slice - libcontainer container kubepods-besteffort-pod52028fca_9cc9_4c38_8131_f9ab8053d3d2.slice. 
Sep 13 00:09:31.524330 containerd[1474]: time="2025-09-13T00:09:31.524230877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68b7b584fb-ncjmg,Uid:020a4b7d-6e2e-437c-b24a-f2796a9ebbbf,Namespace:calico-system,Attempt:0,} returns sandbox id \"4429e3b3d2c007c48e9563c1cc2b7e78e36ba9fe905854aad9c17385a0e4d202\"" Sep 13 00:09:31.525208 kubelet[2553]: E0913 00:09:31.525144 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:31.526769 containerd[1474]: time="2025-09-13T00:09:31.526623633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:09:31.529509 kubelet[2553]: I0913 00:09:31.529473 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/52028fca-9cc9-4c38-8131-f9ab8053d3d2-cni-bin-dir\") pod \"calico-node-nvnsw\" (UID: \"52028fca-9cc9-4c38-8131-f9ab8053d3d2\") " pod="calico-system/calico-node-nvnsw" Sep 13 00:09:31.529608 kubelet[2553]: I0913 00:09:31.529517 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/52028fca-9cc9-4c38-8131-f9ab8053d3d2-var-lib-calico\") pod \"calico-node-nvnsw\" (UID: \"52028fca-9cc9-4c38-8131-f9ab8053d3d2\") " pod="calico-system/calico-node-nvnsw" Sep 13 00:09:31.529608 kubelet[2553]: I0913 00:09:31.529542 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/52028fca-9cc9-4c38-8131-f9ab8053d3d2-node-certs\") pod \"calico-node-nvnsw\" (UID: \"52028fca-9cc9-4c38-8131-f9ab8053d3d2\") " pod="calico-system/calico-node-nvnsw" Sep 13 00:09:31.529608 kubelet[2553]: I0913 00:09:31.529565 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52028fca-9cc9-4c38-8131-f9ab8053d3d2-lib-modules\") pod \"calico-node-nvnsw\" (UID: \"52028fca-9cc9-4c38-8131-f9ab8053d3d2\") " pod="calico-system/calico-node-nvnsw" Sep 13 00:09:31.529608 kubelet[2553]: I0913 00:09:31.529588 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/52028fca-9cc9-4c38-8131-f9ab8053d3d2-var-run-calico\") pod \"calico-node-nvnsw\" (UID: \"52028fca-9cc9-4c38-8131-f9ab8053d3d2\") " pod="calico-system/calico-node-nvnsw" Sep 13 00:09:31.529608 kubelet[2553]: I0913 00:09:31.529608 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/52028fca-9cc9-4c38-8131-f9ab8053d3d2-cni-log-dir\") pod \"calico-node-nvnsw\" (UID: \"52028fca-9cc9-4c38-8131-f9ab8053d3d2\") " pod="calico-system/calico-node-nvnsw" Sep 13 00:09:31.529836 kubelet[2553]: I0913 00:09:31.529628 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/52028fca-9cc9-4c38-8131-f9ab8053d3d2-cni-net-dir\") pod \"calico-node-nvnsw\" (UID: \"52028fca-9cc9-4c38-8131-f9ab8053d3d2\") " pod="calico-system/calico-node-nvnsw" Sep 13 00:09:31.529836 kubelet[2553]: I0913 00:09:31.529651 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52028fca-9cc9-4c38-8131-f9ab8053d3d2-tigera-ca-bundle\") pod \"calico-node-nvnsw\" (UID: \"52028fca-9cc9-4c38-8131-f9ab8053d3d2\") " pod="calico-system/calico-node-nvnsw" Sep 13 00:09:31.529836 kubelet[2553]: I0913 00:09:31.529672 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/52028fca-9cc9-4c38-8131-f9ab8053d3d2-policysync\") pod \"calico-node-nvnsw\" (UID: \"52028fca-9cc9-4c38-8131-f9ab8053d3d2\") " pod="calico-system/calico-node-nvnsw" Sep 13 00:09:31.529836 kubelet[2553]: I0913 00:09:31.529697 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/52028fca-9cc9-4c38-8131-f9ab8053d3d2-flexvol-driver-host\") pod \"calico-node-nvnsw\" (UID: \"52028fca-9cc9-4c38-8131-f9ab8053d3d2\") " pod="calico-system/calico-node-nvnsw" Sep 13 00:09:31.529836 kubelet[2553]: I0913 00:09:31.529750 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52028fca-9cc9-4c38-8131-f9ab8053d3d2-xtables-lock\") pod \"calico-node-nvnsw\" (UID: \"52028fca-9cc9-4c38-8131-f9ab8053d3d2\") " pod="calico-system/calico-node-nvnsw" Sep 13 00:09:31.630674 kubelet[2553]: I0913 00:09:31.630480 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4f9r\" (UniqueName: \"kubernetes.io/projected/52028fca-9cc9-4c38-8131-f9ab8053d3d2-kube-api-access-v4f9r\") pod \"calico-node-nvnsw\" (UID: \"52028fca-9cc9-4c38-8131-f9ab8053d3d2\") " pod="calico-system/calico-node-nvnsw" Sep 13 00:09:31.632911 kubelet[2553]: E0913 00:09:31.632870 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.632911 kubelet[2553]: W0913 00:09:31.632888 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.632911 kubelet[2553]: E0913 00:09:31.632917 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.636006 kubelet[2553]: E0913 00:09:31.635955 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.636084 kubelet[2553]: W0913 00:09:31.636024 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.636084 kubelet[2553]: E0913 00:09:31.636059 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:09:31.689069 kubelet[2553]: E0913 00:09:31.688810 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8tr4" podUID="2983a866-de88-4651-bc70-4e6c5a764426" Sep 13 00:09:31.732005 kubelet[2553]: E0913 00:09:31.731938 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.732005 kubelet[2553]: W0913 00:09:31.731982 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.732218 kubelet[2553]: E0913 00:09:31.732041 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.732513 kubelet[2553]: E0913 00:09:31.732479 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.732564 kubelet[2553]: W0913 00:09:31.732514 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.732564 kubelet[2553]: E0913 00:09:31.732539 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.734115 kubelet[2553]: E0913 00:09:31.734088 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.734115 kubelet[2553]: W0913 00:09:31.734109 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.734175 kubelet[2553]: E0913 00:09:31.734123 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.734433 kubelet[2553]: E0913 00:09:31.734413 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.734433 kubelet[2553]: W0913 00:09:31.734431 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.734494 kubelet[2553]: E0913 00:09:31.734447 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:09:31.734739 kubelet[2553]: E0913 00:09:31.734717 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.734739 kubelet[2553]: W0913 00:09:31.734735 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.734795 kubelet[2553]: E0913 00:09:31.734748 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.737430 kubelet[2553]: E0913 00:09:31.737379 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.737430 kubelet[2553]: W0913 00:09:31.737424 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.737576 kubelet[2553]: E0913 00:09:31.737451 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.737740 kubelet[2553]: E0913 00:09:31.737718 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.737740 kubelet[2553]: W0913 00:09:31.737735 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.737801 kubelet[2553]: E0913 00:09:31.737748 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.738021 kubelet[2553]: E0913 00:09:31.737987 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.738052 kubelet[2553]: W0913 00:09:31.738019 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.738052 kubelet[2553]: E0913 00:09:31.738033 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.738340 kubelet[2553]: E0913 00:09:31.738317 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.738340 kubelet[2553]: W0913 00:09:31.738335 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.738408 kubelet[2553]: E0913 00:09:31.738348 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:09:31.738618 kubelet[2553]: E0913 00:09:31.738598 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.738648 kubelet[2553]: W0913 00:09:31.738616 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.738648 kubelet[2553]: E0913 00:09:31.738630 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.741079 kubelet[2553]: E0913 00:09:31.741052 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.741079 kubelet[2553]: W0913 00:09:31.741075 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.741166 kubelet[2553]: E0913 00:09:31.741090 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.741410 kubelet[2553]: E0913 00:09:31.741386 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.741410 kubelet[2553]: W0913 00:09:31.741405 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.741470 kubelet[2553]: E0913 00:09:31.741418 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.741748 kubelet[2553]: E0913 00:09:31.741726 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.741787 kubelet[2553]: W0913 00:09:31.741748 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.741787 kubelet[2553]: E0913 00:09:31.741762 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.744134 kubelet[2553]: E0913 00:09:31.744105 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.744134 kubelet[2553]: W0913 00:09:31.744129 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.744236 kubelet[2553]: E0913 00:09:31.744145 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:09:31.745037 kubelet[2553]: E0913 00:09:31.744425 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.745037 kubelet[2553]: W0913 00:09:31.744438 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.745037 kubelet[2553]: E0913 00:09:31.744450 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.745037 kubelet[2553]: E0913 00:09:31.744687 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.745037 kubelet[2553]: W0913 00:09:31.744698 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.745037 kubelet[2553]: E0913 00:09:31.744711 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.745224 kubelet[2553]: E0913 00:09:31.745116 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.745224 kubelet[2553]: W0913 00:09:31.745129 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.745224 kubelet[2553]: E0913 00:09:31.745140 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.745389 kubelet[2553]: E0913 00:09:31.745359 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.745389 kubelet[2553]: W0913 00:09:31.745386 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.745454 kubelet[2553]: E0913 00:09:31.745401 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.748182 kubelet[2553]: E0913 00:09:31.748140 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.748182 kubelet[2553]: W0913 00:09:31.748175 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.748306 kubelet[2553]: E0913 00:09:31.748198 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:09:31.748706 kubelet[2553]: E0913 00:09:31.748681 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.748706 kubelet[2553]: W0913 00:09:31.748700 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.748770 kubelet[2553]: E0913 00:09:31.748714 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.752614 kubelet[2553]: E0913 00:09:31.751208 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.752614 kubelet[2553]: W0913 00:09:31.751232 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.752614 kubelet[2553]: E0913 00:09:31.751250 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.752614 kubelet[2553]: I0913 00:09:31.751303 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2983a866-de88-4651-bc70-4e6c5a764426-registration-dir\") pod \"csi-node-driver-t8tr4\" (UID: \"2983a866-de88-4651-bc70-4e6c5a764426\") " pod="calico-system/csi-node-driver-t8tr4" Sep 13 00:09:31.752614 kubelet[2553]: E0913 00:09:31.751666 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.752614 kubelet[2553]: W0913 00:09:31.751684 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.752614 kubelet[2553]: E0913 00:09:31.751717 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.752614 kubelet[2553]: I0913 00:09:31.751741 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5dr2\" (UniqueName: \"kubernetes.io/projected/2983a866-de88-4651-bc70-4e6c5a764426-kube-api-access-k5dr2\") pod \"csi-node-driver-t8tr4\" (UID: \"2983a866-de88-4651-bc70-4e6c5a764426\") " pod="calico-system/csi-node-driver-t8tr4" Sep 13 00:09:31.752614 kubelet[2553]: E0913 00:09:31.752069 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.752887 kubelet[2553]: W0913 00:09:31.752084 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.752887 kubelet[2553]: E0913 00:09:31.752103 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:09:31.755406 kubelet[2553]: E0913 00:09:31.755361 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.755406 kubelet[2553]: W0913 00:09:31.755399 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.755530 kubelet[2553]: E0913 00:09:31.755506 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.755921 kubelet[2553]: E0913 00:09:31.755801 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.755921 kubelet[2553]: W0913 00:09:31.755817 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.755921 kubelet[2553]: E0913 00:09:31.755851 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.756149 kubelet[2553]: E0913 00:09:31.756127 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.756212 kubelet[2553]: W0913 00:09:31.756193 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.756258 kubelet[2553]: E0913 00:09:31.756214 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.756258 kubelet[2553]: I0913 00:09:31.756240 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2983a866-de88-4651-bc70-4e6c5a764426-socket-dir\") pod \"csi-node-driver-t8tr4\" (UID: \"2983a866-de88-4651-bc70-4e6c5a764426\") " pod="calico-system/csi-node-driver-t8tr4" Sep 13 00:09:31.756498 kubelet[2553]: E0913 00:09:31.756478 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.756498 kubelet[2553]: W0913 00:09:31.756493 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.756576 kubelet[2553]: E0913 00:09:31.756505 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:09:31.756750 kubelet[2553]: E0913 00:09:31.756737 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.756750 kubelet[2553]: W0913 00:09:31.756747 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.756818 kubelet[2553]: E0913 00:09:31.756759 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.756959 kubelet[2553]: E0913 00:09:31.756946 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.756959 kubelet[2553]: W0913 00:09:31.756957 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.757037 kubelet[2553]: E0913 00:09:31.756968 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.757189 kubelet[2553]: E0913 00:09:31.757175 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.757189 kubelet[2553]: W0913 00:09:31.757187 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.757239 kubelet[2553]: E0913 00:09:31.757196 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.757239 kubelet[2553]: I0913 00:09:31.757213 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2983a866-de88-4651-bc70-4e6c5a764426-kubelet-dir\") pod \"csi-node-driver-t8tr4\" (UID: \"2983a866-de88-4651-bc70-4e6c5a764426\") " pod="calico-system/csi-node-driver-t8tr4" Sep 13 00:09:31.757428 kubelet[2553]: E0913 00:09:31.757414 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.757428 kubelet[2553]: W0913 00:09:31.757425 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.757553 kubelet[2553]: E0913 00:09:31.757439 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:09:31.757794 kubelet[2553]: E0913 00:09:31.757766 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.757937 kubelet[2553]: W0913 00:09:31.757866 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.757937 kubelet[2553]: E0913 00:09:31.757893 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.758352 kubelet[2553]: E0913 00:09:31.758250 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.758352 kubelet[2553]: W0913 00:09:31.758273 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.758352 kubelet[2553]: E0913 00:09:31.758291 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.758756 kubelet[2553]: E0913 00:09:31.758669 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.758756 kubelet[2553]: W0913 00:09:31.758680 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.758756 kubelet[2553]: E0913 00:09:31.758695 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.758756 kubelet[2553]: I0913 00:09:31.758712 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2983a866-de88-4651-bc70-4e6c5a764426-varrun\") pod \"csi-node-driver-t8tr4\" (UID: \"2983a866-de88-4651-bc70-4e6c5a764426\") " pod="calico-system/csi-node-driver-t8tr4" Sep 13 00:09:31.759063 kubelet[2553]: E0913 00:09:31.759036 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.759063 kubelet[2553]: W0913 00:09:31.759061 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.759158 kubelet[2553]: E0913 00:09:31.759083 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:09:31.759461 kubelet[2553]: E0913 00:09:31.759437 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.759461 kubelet[2553]: W0913 00:09:31.759454 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.759537 kubelet[2553]: E0913 00:09:31.759486 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.759771 kubelet[2553]: E0913 00:09:31.759747 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.759771 kubelet[2553]: W0913 00:09:31.759760 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.759950 kubelet[2553]: E0913 00:09:31.759787 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.760126 kubelet[2553]: E0913 00:09:31.760105 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.760126 kubelet[2553]: W0913 00:09:31.760116 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.760227 kubelet[2553]: E0913 00:09:31.760189 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.763175 kubelet[2553]: E0913 00:09:31.762125 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.763175 kubelet[2553]: W0913 00:09:31.762140 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.763175 kubelet[2553]: E0913 00:09:31.762154 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.763175 kubelet[2553]: E0913 00:09:31.762456 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.763175 kubelet[2553]: W0913 00:09:31.762469 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.763175 kubelet[2553]: E0913 00:09:31.762482 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:09:31.770586 kubelet[2553]: E0913 00:09:31.770500 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.770586 kubelet[2553]: W0913 00:09:31.770527 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.770586 kubelet[2553]: E0913 00:09:31.770553 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.787046 containerd[1474]: time="2025-09-13T00:09:31.786599306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nvnsw,Uid:52028fca-9cc9-4c38-8131-f9ab8053d3d2,Namespace:calico-system,Attempt:0,}" Sep 13 00:09:31.821649 containerd[1474]: time="2025-09-13T00:09:31.820894017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:09:31.821649 containerd[1474]: time="2025-09-13T00:09:31.821620933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:09:31.821649 containerd[1474]: time="2025-09-13T00:09:31.821652873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:31.821845 containerd[1474]: time="2025-09-13T00:09:31.821763350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:31.847474 systemd[1]: Started cri-containerd-4c1463209d6904a980d2f150b9e48fb515680b9fe7fa7ca3cab3d61ab47c2a15.scope - libcontainer container 4c1463209d6904a980d2f150b9e48fb515680b9fe7fa7ca3cab3d61ab47c2a15. Sep 13 00:09:31.860322 kubelet[2553]: E0913 00:09:31.860249 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.860322 kubelet[2553]: W0913 00:09:31.860310 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.860463 kubelet[2553]: E0913 00:09:31.860341 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.860814 kubelet[2553]: E0913 00:09:31.860776 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.860814 kubelet[2553]: W0913 00:09:31.860794 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.860814 kubelet[2553]: E0913 00:09:31.860813 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:09:31.873342 kubelet[2553]: E0913 00:09:31.872797 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.873342 kubelet[2553]: W0913 00:09:31.872816 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.873342 kubelet[2553]: E0913 00:09:31.872963 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.873342 kubelet[2553]: E0913 00:09:31.873099 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.873342 kubelet[2553]: W0913 00:09:31.873115 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.873342 kubelet[2553]: E0913 00:09:31.873328 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.873585 kubelet[2553]: E0913 00:09:31.873459 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.873585 kubelet[2553]: W0913 00:09:31.873470 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.873585 kubelet[2553]: E0913 00:09:31.873483 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.886808 kubelet[2553]: E0913 00:09:31.886712 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:31.886808 kubelet[2553]: W0913 00:09:31.886738 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:31.886808 kubelet[2553]: E0913 00:09:31.886765 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:31.887490 containerd[1474]: time="2025-09-13T00:09:31.887448305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nvnsw,Uid:52028fca-9cc9-4c38-8131-f9ab8053d3d2,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c1463209d6904a980d2f150b9e48fb515680b9fe7fa7ca3cab3d61ab47c2a15\"" Sep 13 00:09:33.010207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1481783889.mount: Deactivated successfully. 
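The repeated driver-call.go/plugins.go messages above come from the kubelet's FlexVolume prober: on every probe cycle it executes each driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument init and decodes the driver's stdout as JSON. Here the nodeagent~uds/uds binary is missing, so the exec returns empty output, and unmarshalling an empty string is exactly what yields "unexpected end of JSON input". A minimal sketch of that call convention (illustrative, not kubelet's actual code; a healthy driver would answer init with JSON like {"status":"Success","capabilities":{"attach":false}}, which is the shape DriverStatus models):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus models the JSON a FlexVolume driver prints on stdout.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func callDriver(path string, args ...string) (*DriverStatus, error) {
	// If the binary is absent, out is empty and err is
	// "executable file not found in $PATH", as in the log.
	out, err := exec.Command(path, args...).CombinedOutput()
	var st DriverStatus
	// json.Unmarshal on empty output fails with
	// "unexpected end of JSON input" — the error repeated above.
	if jerr := json.Unmarshal(out, &st); jerr != nil {
		return nil, fmt.Errorf("failed to unmarshal output for command: %v, output: %q, error: %v (exec error: %v)",
			args, out, jerr, err)
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	if err != nil {
		fmt.Println(err)
	}
}
```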
Sep 13 00:09:33.385300 containerd[1474]: time="2025-09-13T00:09:33.385227130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:33.386843 containerd[1474]: time="2025-09-13T00:09:33.386769517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 13 00:09:33.387909 containerd[1474]: time="2025-09-13T00:09:33.387868722Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:33.390137 containerd[1474]: time="2025-09-13T00:09:33.390092480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:33.390731 containerd[1474]: time="2025-09-13T00:09:33.390696224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 1.864034049s" Sep 13 00:09:33.390731 containerd[1474]: time="2025-09-13T00:09:33.390724718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:09:33.395013 containerd[1474]: time="2025-09-13T00:09:33.394958150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:09:33.415829 containerd[1474]: time="2025-09-13T00:09:33.415770438Z" level=info msg="CreateContainer within sandbox \"4429e3b3d2c007c48e9563c1cc2b7e78e36ba9fe905854aad9c17385a0e4d202\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:09:33.431314 containerd[1474]: time="2025-09-13T00:09:33.431239007Z" level=info msg="CreateContainer within sandbox \"4429e3b3d2c007c48e9563c1cc2b7e78e36ba9fe905854aad9c17385a0e4d202\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b5ac31019ba8074c904e96a4e4acbbf1c90538c1509fe07f18cff47897bec70f\"" Sep 13 00:09:33.434906 containerd[1474]: time="2025-09-13T00:09:33.434876921Z" level=info msg="StartContainer for \"b5ac31019ba8074c904e96a4e4acbbf1c90538c1509fe07f18cff47897bec70f\"" Sep 13 00:09:33.467259 systemd[1]: Started cri-containerd-b5ac31019ba8074c904e96a4e4acbbf1c90538c1509fe07f18cff47897bec70f.scope - libcontainer container b5ac31019ba8074c904e96a4e4acbbf1c90538c1509fe07f18cff47897bec70f. 
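The containerd entries above trace the CRI container lifecycle for calico-typha: CreateContainer inside an existing pod sandbox, then StartContainer, with the runtime wrapping the new container in a cri-containerd-<id>.scope systemd unit. A hedged sketch of the same two calls through the CRI API (k8s.io/cri-api); the socket path and container config are illustrative, and a real request also carries the sandbox's PodSandboxConfig:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtime.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox id as reported by the earlier "returns sandbox id" log line.
	sandboxID := "4429e3b3d2c007c48e9563c1cc2b7e78e36ba9fe905854aad9c17385a0e4d202"

	created, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtime.ContainerConfig{
			Metadata: &runtime.ContainerMetadata{Name: "calico-typha"},
			Image:    &runtime.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.30.3"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// StartContainer is what triggers the cri-containerd-<id>.scope unit.
	if _, err := rt.StartContainer(ctx, &runtime.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}
```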
Sep 13 00:09:33.514557 containerd[1474]: time="2025-09-13T00:09:33.514482287Z" level=info msg="StartContainer for \"b5ac31019ba8074c904e96a4e4acbbf1c90538c1509fe07f18cff47897bec70f\" returns successfully" Sep 13 00:09:34.086783 kubelet[2553]: E0913 00:09:34.086679 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8tr4" podUID="2983a866-de88-4651-bc70-4e6c5a764426" Sep 13 00:09:34.153538 kubelet[2553]: E0913 00:09:34.153486 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:34.167098 kubelet[2553]: E0913 00:09:34.167040 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:34.167098 kubelet[2553]: W0913 00:09:34.167077 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:34.167098 kubelet[2553]: E0913 00:09:34.167111 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:34.167573 kubelet[2553]: E0913 00:09:34.167547 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:34.167573 kubelet[2553]: W0913 00:09:34.167562 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:34.167573 kubelet[2553]: E0913 00:09:34.167573 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:34.167830 kubelet[2553]: E0913 00:09:34.167815 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:34.167830 kubelet[2553]: W0913 00:09:34.167827 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:34.167914 kubelet[2553]: E0913 00:09:34.167837 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:09:34.168121 kubelet[2553]: E0913 00:09:34.168104 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:09:34.168121 kubelet[2553]: W0913 00:09:34.168117 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:09:34.168212 kubelet[2553]: E0913 00:09:34.168128 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:09:34.351033 kubelet[2553]: I0913 00:09:34.350544 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-68b7b584fb-ncjmg" podStartSLOduration=1.481833007 podStartE2EDuration="3.350524345s" podCreationTimestamp="2025-09-13 00:09:31 +0000 UTC" firstStartedPulling="2025-09-13 00:09:31.52607383 +0000 UTC m=+22.538235241" lastFinishedPulling="2025-09-13 00:09:33.394765168 +0000 UTC m=+24.406926579" observedRunningTime="2025-09-13 00:09:34.350137529 +0000 UTC m=+25.362298940" watchObservedRunningTime="2025-09-13 00:09:34.350524345 +0000 UTC m=+25.362685756" Sep 13 00:09:34.972699 containerd[1474]: time="2025-09-13T00:09:34.972592940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:34.976245 containerd[1474]: time="2025-09-13T00:09:34.976161032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:09:34.977758 containerd[1474]: time="2025-09-13T00:09:34.977681238Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:34.980076 containerd[1474]: time="2025-09-13T00:09:34.980033536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:34.980931 containerd[1474]: time="2025-09-13T00:09:34.980862463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.585865261s" Sep 13 00:09:34.980931 containerd[1474]: time="2025-09-13T00:09:34.980911085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:09:34.983585 containerd[1474]: time="2025-09-13T00:09:34.983538660Z" level=info msg="CreateContainer within sandbox \"4c1463209d6904a980d2f150b9e48fb515680b9fe7fa7ca3cab3d61ab47c2a15\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:09:35.008691 containerd[1474]: time="2025-09-13T00:09:35.008625571Z" level=info msg="CreateContainer within sandbox \"4c1463209d6904a980d2f150b9e48fb515680b9fe7fa7ca3cab3d61ab47c2a15\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7201776d1d53f257c3588832d237812d9446ae92701cb151588405826e32815e\"" Sep 13 00:09:35.009900 containerd[1474]: time="2025-09-13T00:09:35.009828281Z" level=info msg="StartContainer for \"7201776d1d53f257c3588832d237812d9446ae92701cb151588405826e32815e\"" Sep 13 00:09:35.049178 systemd[1]: Started cri-containerd-7201776d1d53f257c3588832d237812d9446ae92701cb151588405826e32815e.scope - libcontainer container 7201776d1d53f257c3588832d237812d9446ae92701cb151588405826e32815e. 
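The pod_startup_latency_tracker line above carries its own arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). A short check of the logged numbers (assumption: that subtraction is the intended relationship; all four timestamps are taken verbatim from the log line):

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// One layout covers all four timestamps: Go accepts an optional
	// fractional-seconds field when parsing even if the layout omits it.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-13 00:09:31 +0000 UTC")
	firstPull := mustParse("2025-09-13 00:09:31.52607383 +0000 UTC")
	lastPull := mustParse("2025-09-13 00:09:33.394765168 +0000 UTC")
	running := mustParse("2025-09-13 00:09:34.350524345 +0000 UTC")

	e2e := running.Sub(created)        // 3.350524345s == podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // 1.868691338s spent pulling
	slo := e2e - pulling               // 1.481833007s == podStartSLOduration
	fmt.Println(e2e, pulling, slo)
}
```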
Sep 13 00:09:35.093847 containerd[1474]: time="2025-09-13T00:09:35.093767465Z" level=info msg="StartContainer for \"7201776d1d53f257c3588832d237812d9446ae92701cb151588405826e32815e\" returns successfully" Sep 13 00:09:35.108256 systemd[1]: cri-containerd-7201776d1d53f257c3588832d237812d9446ae92701cb151588405826e32815e.scope: Deactivated successfully. Sep 13 00:09:35.157360 kubelet[2553]: I0913 00:09:35.157296 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:09:35.158068 kubelet[2553]: E0913 00:09:35.157747 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:35.411289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7201776d1d53f257c3588832d237812d9446ae92701cb151588405826e32815e-rootfs.mount: Deactivated successfully. Sep 13 00:09:35.557963 containerd[1474]: time="2025-09-13T00:09:35.554856010Z" level=info msg="shim disconnected" id=7201776d1d53f257c3588832d237812d9446ae92701cb151588405826e32815e namespace=k8s.io Sep 13 00:09:35.557963 containerd[1474]: time="2025-09-13T00:09:35.557968376Z" level=warning msg="cleaning up after shim disconnected" id=7201776d1d53f257c3588832d237812d9446ae92701cb151588405826e32815e namespace=k8s.io Sep 13 00:09:35.558256 containerd[1474]: time="2025-09-13T00:09:35.557988724Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:36.084840 kubelet[2553]: E0913 00:09:36.084757 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8tr4" podUID="2983a866-de88-4651-bc70-4e6c5a764426" Sep 13 00:09:36.164011 containerd[1474]: time="2025-09-13T00:09:36.162897082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:09:38.084592 kubelet[2553]: E0913 00:09:38.084505 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8tr4" podUID="2983a866-de88-4651-bc70-4e6c5a764426" Sep 13 00:09:38.960525 containerd[1474]: time="2025-09-13T00:09:38.960441598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:38.961423 containerd[1474]: time="2025-09-13T00:09:38.961316952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 00:09:38.964506 containerd[1474]: time="2025-09-13T00:09:38.964437572Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:38.967561 containerd[1474]: time="2025-09-13T00:09:38.967510672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:38.968373 containerd[1474]: time="2025-09-13T00:09:38.968334009Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id 
\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.805382344s" Sep 13 00:09:38.968446 containerd[1474]: time="2025-09-13T00:09:38.968379704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:09:38.971399 containerd[1474]: time="2025-09-13T00:09:38.971362475Z" level=info msg="CreateContainer within sandbox \"4c1463209d6904a980d2f150b9e48fb515680b9fe7fa7ca3cab3d61ab47c2a15\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:09:38.986850 containerd[1474]: time="2025-09-13T00:09:38.986787468Z" level=info msg="CreateContainer within sandbox \"4c1463209d6904a980d2f150b9e48fb515680b9fe7fa7ca3cab3d61ab47c2a15\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"26f710ced97485f6efacd14b7b234dd0efcb3228750e71f4ffe2056522e079ce\"" Sep 13 00:09:38.987477 containerd[1474]: time="2025-09-13T00:09:38.987448159Z" level=info msg="StartContainer for \"26f710ced97485f6efacd14b7b234dd0efcb3228750e71f4ffe2056522e079ce\"" Sep 13 00:09:39.034326 systemd[1]: Started cri-containerd-26f710ced97485f6efacd14b7b234dd0efcb3228750e71f4ffe2056522e079ce.scope - libcontainer container 26f710ced97485f6efacd14b7b234dd0efcb3228750e71f4ffe2056522e079ce. Sep 13 00:09:39.068014 containerd[1474]: time="2025-09-13T00:09:39.067878163Z" level=info msg="StartContainer for \"26f710ced97485f6efacd14b7b234dd0efcb3228750e71f4ffe2056522e079ce\" returns successfully" Sep 13 00:09:40.085554 kubelet[2553]: E0913 00:09:40.085452 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8tr4" podUID="2983a866-de88-4651-bc70-4e6c5a764426" Sep 13 00:09:40.361885 containerd[1474]: time="2025-09-13T00:09:40.361736157Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:09:40.366741 systemd[1]: cri-containerd-26f710ced97485f6efacd14b7b234dd0efcb3228750e71f4ffe2056522e079ce.scope: Deactivated successfully. Sep 13 00:09:40.388174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26f710ced97485f6efacd14b7b234dd0efcb3228750e71f4ffe2056522e079ce-rootfs.mount: Deactivated successfully. 
Sep 13 00:09:40.462121 kubelet[2553]: I0913 00:09:40.462057 2553 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:09:41.032780 kubelet[2553]: I0913 00:09:41.032670 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n67v\" (UniqueName: \"kubernetes.io/projected/925826ed-f268-4368-bf97-95adf0976969-kube-api-access-7n67v\") pod \"calico-apiserver-76f56b585c-zr49b\" (UID: \"925826ed-f268-4368-bf97-95adf0976969\") " pod="calico-apiserver/calico-apiserver-76f56b585c-zr49b" Sep 13 00:09:41.032780 kubelet[2553]: I0913 00:09:41.032751 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/925826ed-f268-4368-bf97-95adf0976969-calico-apiserver-certs\") pod \"calico-apiserver-76f56b585c-zr49b\" (UID: \"925826ed-f268-4368-bf97-95adf0976969\") " pod="calico-apiserver/calico-apiserver-76f56b585c-zr49b" Sep 13 00:09:41.123625 systemd[1]: Created slice kubepods-besteffort-pod925826ed_f268_4368_bf97_95adf0976969.slice - libcontainer container kubepods-besteffort-pod925826ed_f268_4368_bf97_95adf0976969.slice. Sep 13 00:09:41.128801 systemd[1]: Created slice kubepods-besteffort-podbda40bd6_a908_46e8_9d69_02dbc2713a08.slice - libcontainer container kubepods-besteffort-podbda40bd6_a908_46e8_9d69_02dbc2713a08.slice. Sep 13 00:09:41.134109 kubelet[2553]: I0913 00:09:41.133022 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bda40bd6-a908-46e8-9d69-02dbc2713a08-tigera-ca-bundle\") pod \"calico-kube-controllers-7854f6d79d-nbc6q\" (UID: \"bda40bd6-a908-46e8-9d69-02dbc2713a08\") " pod="calico-system/calico-kube-controllers-7854f6d79d-nbc6q" Sep 13 00:09:41.134109 kubelet[2553]: I0913 00:09:41.133087 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slhz8\" (UniqueName: \"kubernetes.io/projected/a73c3ca8-469a-40ee-8971-a5fa4ecc6460-kube-api-access-slhz8\") pod \"calico-apiserver-76f56b585c-gwf2q\" (UID: \"a73c3ca8-469a-40ee-8971-a5fa4ecc6460\") " pod="calico-apiserver/calico-apiserver-76f56b585c-gwf2q" Sep 13 00:09:41.134109 kubelet[2553]: I0913 00:09:41.133114 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b17de5d2-115f-45db-a278-f3d4d17bee79-goldmane-ca-bundle\") pod \"goldmane-7988f88666-gg29f\" (UID: \"b17de5d2-115f-45db-a278-f3d4d17bee79\") " pod="calico-system/goldmane-7988f88666-gg29f" Sep 13 00:09:41.134109 kubelet[2553]: I0913 00:09:41.133151 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swb9w\" (UniqueName: \"kubernetes.io/projected/a4e7348d-2cc7-4002-b829-c570ed8d1df0-kube-api-access-swb9w\") pod \"coredns-7c65d6cfc9-q44l7\" (UID: \"a4e7348d-2cc7-4002-b829-c570ed8d1df0\") " pod="kube-system/coredns-7c65d6cfc9-q44l7" Sep 13 00:09:41.134109 kubelet[2553]: I0913 00:09:41.133173 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc6b5\" (UniqueName: \"kubernetes.io/projected/bda40bd6-a908-46e8-9d69-02dbc2713a08-kube-api-access-gc6b5\") pod \"calico-kube-controllers-7854f6d79d-nbc6q\" (UID: \"bda40bd6-a908-46e8-9d69-02dbc2713a08\") " 
pod="calico-system/calico-kube-controllers-7854f6d79d-nbc6q" Sep 13 00:09:41.133394 systemd[1]: Created slice kubepods-burstable-pod60e91eea_bf80_4775_825e_60892cf59ae7.slice - libcontainer container kubepods-burstable-pod60e91eea_bf80_4775_825e_60892cf59ae7.slice. Sep 13 00:09:41.135175 kubelet[2553]: I0913 00:09:41.133196 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nsjv\" (UniqueName: \"kubernetes.io/projected/b17de5d2-115f-45db-a278-f3d4d17bee79-kube-api-access-2nsjv\") pod \"goldmane-7988f88666-gg29f\" (UID: \"b17de5d2-115f-45db-a278-f3d4d17bee79\") " pod="calico-system/goldmane-7988f88666-gg29f" Sep 13 00:09:41.135175 kubelet[2553]: I0913 00:09:41.133221 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a73c3ca8-469a-40ee-8971-a5fa4ecc6460-calico-apiserver-certs\") pod \"calico-apiserver-76f56b585c-gwf2q\" (UID: \"a73c3ca8-469a-40ee-8971-a5fa4ecc6460\") " pod="calico-apiserver/calico-apiserver-76f56b585c-gwf2q" Sep 13 00:09:41.135175 kubelet[2553]: I0913 00:09:41.133243 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60e91eea-bf80-4775-825e-60892cf59ae7-config-volume\") pod \"coredns-7c65d6cfc9-bqltl\" (UID: \"60e91eea-bf80-4775-825e-60892cf59ae7\") " pod="kube-system/coredns-7c65d6cfc9-bqltl" Sep 13 00:09:41.135175 kubelet[2553]: I0913 00:09:41.133269 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b17de5d2-115f-45db-a278-f3d4d17bee79-config\") pod \"goldmane-7988f88666-gg29f\" (UID: \"b17de5d2-115f-45db-a278-f3d4d17bee79\") " pod="calico-system/goldmane-7988f88666-gg29f" Sep 13 00:09:41.135175 kubelet[2553]: I0913 00:09:41.133289 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/085df67f-1fcd-45a7-a238-37262e9dcfa2-whisker-ca-bundle\") pod \"whisker-759dcc9f77-fl2jj\" (UID: \"085df67f-1fcd-45a7-a238-37262e9dcfa2\") " pod="calico-system/whisker-759dcc9f77-fl2jj" Sep 13 00:09:41.135404 kubelet[2553]: I0913 00:09:41.133350 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b17de5d2-115f-45db-a278-f3d4d17bee79-goldmane-key-pair\") pod \"goldmane-7988f88666-gg29f\" (UID: \"b17de5d2-115f-45db-a278-f3d4d17bee79\") " pod="calico-system/goldmane-7988f88666-gg29f" Sep 13 00:09:41.135404 kubelet[2553]: I0913 00:09:41.133373 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4e7348d-2cc7-4002-b829-c570ed8d1df0-config-volume\") pod \"coredns-7c65d6cfc9-q44l7\" (UID: \"a4e7348d-2cc7-4002-b829-c570ed8d1df0\") " pod="kube-system/coredns-7c65d6cfc9-q44l7" Sep 13 00:09:41.135404 kubelet[2553]: I0913 00:09:41.133393 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/085df67f-1fcd-45a7-a238-37262e9dcfa2-whisker-backend-key-pair\") pod \"whisker-759dcc9f77-fl2jj\" (UID: \"085df67f-1fcd-45a7-a238-37262e9dcfa2\") " pod="calico-system/whisker-759dcc9f77-fl2jj" Sep 13 00:09:41.135404 
kubelet[2553]: I0913 00:09:41.133415 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbx9z\" (UniqueName: \"kubernetes.io/projected/085df67f-1fcd-45a7-a238-37262e9dcfa2-kube-api-access-hbx9z\") pod \"whisker-759dcc9f77-fl2jj\" (UID: \"085df67f-1fcd-45a7-a238-37262e9dcfa2\") " pod="calico-system/whisker-759dcc9f77-fl2jj" Sep 13 00:09:41.135404 kubelet[2553]: I0913 00:09:41.133440 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plhqb\" (UniqueName: \"kubernetes.io/projected/60e91eea-bf80-4775-825e-60892cf59ae7-kube-api-access-plhqb\") pod \"coredns-7c65d6cfc9-bqltl\" (UID: \"60e91eea-bf80-4775-825e-60892cf59ae7\") " pod="kube-system/coredns-7c65d6cfc9-bqltl" Sep 13 00:09:41.149590 containerd[1474]: time="2025-09-13T00:09:41.148459483Z" level=info msg="shim disconnected" id=26f710ced97485f6efacd14b7b234dd0efcb3228750e71f4ffe2056522e079ce namespace=k8s.io Sep 13 00:09:41.149590 containerd[1474]: time="2025-09-13T00:09:41.148600307Z" level=warning msg="cleaning up after shim disconnected" id=26f710ced97485f6efacd14b7b234dd0efcb3228750e71f4ffe2056522e079ce namespace=k8s.io Sep 13 00:09:41.149590 containerd[1474]: time="2025-09-13T00:09:41.148615806Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:41.156206 systemd[1]: Created slice kubepods-besteffort-poda73c3ca8_469a_40ee_8971_a5fa4ecc6460.slice - libcontainer container kubepods-besteffort-poda73c3ca8_469a_40ee_8971_a5fa4ecc6460.slice. Sep 13 00:09:41.168257 systemd[1]: Created slice kubepods-burstable-poda4e7348d_2cc7_4002_b829_c570ed8d1df0.slice - libcontainer container kubepods-burstable-poda4e7348d_2cc7_4002_b829_c570ed8d1df0.slice. Sep 13 00:09:41.181464 systemd[1]: Created slice kubepods-besteffort-pod085df67f_1fcd_45a7_a238_37262e9dcfa2.slice - libcontainer container kubepods-besteffort-pod085df67f_1fcd_45a7_a238_37262e9dcfa2.slice. Sep 13 00:09:41.200077 systemd[1]: Created slice kubepods-besteffort-podb17de5d2_115f_45db_a278_f3d4d17bee79.slice - libcontainer container kubepods-besteffort-podb17de5d2_115f_45db_a278_f3d4d17bee79.slice. 
Sep 13 00:09:41.206107 containerd[1474]: time="2025-09-13T00:09:41.204850986Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:09:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:09:41.427510 containerd[1474]: time="2025-09-13T00:09:41.427433056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f56b585c-zr49b,Uid:925826ed-f268-4368-bf97-95adf0976969,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:09:41.431727 containerd[1474]: time="2025-09-13T00:09:41.431665842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7854f6d79d-nbc6q,Uid:bda40bd6-a908-46e8-9d69-02dbc2713a08,Namespace:calico-system,Attempt:0,}" Sep 13 00:09:41.437336 kubelet[2553]: E0913 00:09:41.437255 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:41.437950 containerd[1474]: time="2025-09-13T00:09:41.437907800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bqltl,Uid:60e91eea-bf80-4775-825e-60892cf59ae7,Namespace:kube-system,Attempt:0,}" Sep 13 00:09:41.463254 containerd[1474]: time="2025-09-13T00:09:41.463182368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f56b585c-gwf2q,Uid:a73c3ca8-469a-40ee-8971-a5fa4ecc6460,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:09:41.474043 kubelet[2553]: E0913 00:09:41.473928 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:41.474813 containerd[1474]: time="2025-09-13T00:09:41.474580916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q44l7,Uid:a4e7348d-2cc7-4002-b829-c570ed8d1df0,Namespace:kube-system,Attempt:0,}" Sep 13 00:09:41.489296 containerd[1474]: time="2025-09-13T00:09:41.489241518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-759dcc9f77-fl2jj,Uid:085df67f-1fcd-45a7-a238-37262e9dcfa2,Namespace:calico-system,Attempt:0,}" Sep 13 00:09:41.506720 containerd[1474]: time="2025-09-13T00:09:41.506460935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gg29f,Uid:b17de5d2-115f-45db-a278-f3d4d17bee79,Namespace:calico-system,Attempt:0,}" Sep 13 00:09:41.654915 containerd[1474]: time="2025-09-13T00:09:41.653670358Z" level=error msg="Failed to destroy network for sandbox \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.668343 containerd[1474]: time="2025-09-13T00:09:41.668278532Z" level=error msg="Failed to destroy network for sandbox \"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.670470 containerd[1474]: time="2025-09-13T00:09:41.670421995Z" level=error msg="Failed to destroy network for sandbox \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.680399 containerd[1474]: time="2025-09-13T00:09:41.680043138Z" level=error msg="encountered an error cleaning up failed sandbox \"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.680399 containerd[1474]: time="2025-09-13T00:09:41.680146962Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bqltl,Uid:60e91eea-bf80-4775-825e-60892cf59ae7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.682356 containerd[1474]: time="2025-09-13T00:09:41.682180420Z" level=error msg="encountered an error cleaning up failed sandbox \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.682430 containerd[1474]: time="2025-09-13T00:09:41.682403748Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7854f6d79d-nbc6q,Uid:bda40bd6-a908-46e8-9d69-02dbc2713a08,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.682497 containerd[1474]: time="2025-09-13T00:09:41.682284204Z" level=error msg="encountered an error cleaning up failed sandbox \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.682537 containerd[1474]: time="2025-09-13T00:09:41.682502374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f56b585c-zr49b,Uid:925826ed-f268-4368-bf97-95adf0976969,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.690735 containerd[1474]: time="2025-09-13T00:09:41.689892097Z" level=error msg="Failed to destroy network for sandbox \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.690735 
containerd[1474]: time="2025-09-13T00:09:41.690469410Z" level=error msg="encountered an error cleaning up failed sandbox \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.690735 containerd[1474]: time="2025-09-13T00:09:41.690529433Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f56b585c-gwf2q,Uid:a73c3ca8-469a-40ee-8971-a5fa4ecc6460,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.697626 kubelet[2553]: E0913 00:09:41.697495 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.697626 kubelet[2553]: E0913 00:09:41.697560 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.697626 kubelet[2553]: E0913 00:09:41.697569 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.697858 kubelet[2553]: E0913 00:09:41.697630 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76f56b585c-zr49b" Sep 13 00:09:41.697858 kubelet[2553]: E0913 00:09:41.697646 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76f56b585c-gwf2q" Sep 13 00:09:41.697858 kubelet[2553]: E0913 00:09:41.697669 2553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76f56b585c-zr49b" Sep 13 00:09:41.697858 kubelet[2553]: E0913 00:09:41.697676 2553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76f56b585c-gwf2q" Sep 13 00:09:41.698029 kubelet[2553]: E0913 00:09:41.697721 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76f56b585c-zr49b_calico-apiserver(925826ed-f268-4368-bf97-95adf0976969)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76f56b585c-zr49b_calico-apiserver(925826ed-f268-4368-bf97-95adf0976969)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76f56b585c-zr49b" podUID="925826ed-f268-4368-bf97-95adf0976969" Sep 13 00:09:41.698029 kubelet[2553]: E0913 00:09:41.697736 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76f56b585c-gwf2q_calico-apiserver(a73c3ca8-469a-40ee-8971-a5fa4ecc6460)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76f56b585c-gwf2q_calico-apiserver(a73c3ca8-469a-40ee-8971-a5fa4ecc6460)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76f56b585c-gwf2q" podUID="a73c3ca8-469a-40ee-8971-a5fa4ecc6460" Sep 13 00:09:41.698191 kubelet[2553]: E0913 00:09:41.697590 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7854f6d79d-nbc6q" Sep 13 00:09:41.698191 kubelet[2553]: E0913 00:09:41.697786 2553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7854f6d79d-nbc6q" Sep 13 00:09:41.698191 kubelet[2553]: E0913 
00:09:41.697816 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7854f6d79d-nbc6q_calico-system(bda40bd6-a908-46e8-9d69-02dbc2713a08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7854f6d79d-nbc6q_calico-system(bda40bd6-a908-46e8-9d69-02dbc2713a08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7854f6d79d-nbc6q" podUID="bda40bd6-a908-46e8-9d69-02dbc2713a08" Sep 13 00:09:41.698304 kubelet[2553]: E0913 00:09:41.697493 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.698304 kubelet[2553]: E0913 00:09:41.697907 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bqltl" Sep 13 00:09:41.698304 kubelet[2553]: E0913 00:09:41.697929 2553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bqltl" Sep 13 00:09:41.698393 kubelet[2553]: E0913 00:09:41.697961 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-bqltl_kube-system(60e91eea-bf80-4775-825e-60892cf59ae7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-bqltl_kube-system(60e91eea-bf80-4775-825e-60892cf59ae7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bqltl" podUID="60e91eea-bf80-4775-825e-60892cf59ae7" Sep 13 00:09:41.727767 containerd[1474]: time="2025-09-13T00:09:41.727650801Z" level=error msg="Failed to destroy network for sandbox \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.728265 containerd[1474]: time="2025-09-13T00:09:41.728216012Z" level=error msg="encountered an error cleaning up failed 
sandbox \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.728327 containerd[1474]: time="2025-09-13T00:09:41.728290301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gg29f,Uid:b17de5d2-115f-45db-a278-f3d4d17bee79,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.729176 kubelet[2553]: E0913 00:09:41.728621 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.729176 kubelet[2553]: E0913 00:09:41.728722 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-gg29f" Sep 13 00:09:41.729176 kubelet[2553]: E0913 00:09:41.728754 2553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-gg29f" Sep 13 00:09:41.729326 kubelet[2553]: E0913 00:09:41.728811 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-gg29f_calico-system(b17de5d2-115f-45db-a278-f3d4d17bee79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-gg29f_calico-system(b17de5d2-115f-45db-a278-f3d4d17bee79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-gg29f" podUID="b17de5d2-115f-45db-a278-f3d4d17bee79" Sep 13 00:09:41.738720 containerd[1474]: time="2025-09-13T00:09:41.738672391Z" level=error msg="Failed to destroy network for sandbox \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.739206 containerd[1474]: 
time="2025-09-13T00:09:41.739174935Z" level=error msg="encountered an error cleaning up failed sandbox \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.739267 containerd[1474]: time="2025-09-13T00:09:41.739244655Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-759dcc9f77-fl2jj,Uid:085df67f-1fcd-45a7-a238-37262e9dcfa2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.739482 kubelet[2553]: E0913 00:09:41.739446 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.739562 kubelet[2553]: E0913 00:09:41.739501 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-759dcc9f77-fl2jj" Sep 13 00:09:41.739562 kubelet[2553]: E0913 00:09:41.739519 2553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-759dcc9f77-fl2jj" Sep 13 00:09:41.739615 kubelet[2553]: E0913 00:09:41.739558 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-759dcc9f77-fl2jj_calico-system(085df67f-1fcd-45a7-a238-37262e9dcfa2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-759dcc9f77-fl2jj_calico-system(085df67f-1fcd-45a7-a238-37262e9dcfa2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-759dcc9f77-fl2jj" podUID="085df67f-1fcd-45a7-a238-37262e9dcfa2" Sep 13 00:09:41.743634 containerd[1474]: time="2025-09-13T00:09:41.743440512Z" level=error msg="Failed to destroy network for sandbox \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 13 00:09:41.743944 containerd[1474]: time="2025-09-13T00:09:41.743911035Z" level=error msg="encountered an error cleaning up failed sandbox \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.743980 containerd[1474]: time="2025-09-13T00:09:41.743962141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q44l7,Uid:a4e7348d-2cc7-4002-b829-c570ed8d1df0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.744275 kubelet[2553]: E0913 00:09:41.744249 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:41.744318 kubelet[2553]: E0913 00:09:41.744286 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q44l7" Sep 13 00:09:41.744318 kubelet[2553]: E0913 00:09:41.744303 2553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q44l7" Sep 13 00:09:41.744371 kubelet[2553]: E0913 00:09:41.744338 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-q44l7_kube-system(a4e7348d-2cc7-4002-b829-c570ed8d1df0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-q44l7_kube-system(a4e7348d-2cc7-4002-b829-c570ed8d1df0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q44l7" podUID="a4e7348d-2cc7-4002-b829-c570ed8d1df0" Sep 13 00:09:42.092028 systemd[1]: Created slice kubepods-besteffort-pod2983a866_de88_4651_bc70_4e6c5a764426.slice - libcontainer container kubepods-besteffort-pod2983a866_de88_4651_bc70_4e6c5a764426.slice. 
Sep 13 00:09:42.098062 containerd[1474]: time="2025-09-13T00:09:42.097985323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8tr4,Uid:2983a866-de88-4651-bc70-4e6c5a764426,Namespace:calico-system,Attempt:0,}" Sep 13 00:09:42.188843 kubelet[2553]: I0913 00:09:42.188770 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e" Sep 13 00:09:42.192468 kubelet[2553]: I0913 00:09:42.192409 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Sep 13 00:09:42.210950 containerd[1474]: time="2025-09-13T00:09:42.210878481Z" level=error msg="Failed to destroy network for sandbox \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:42.212698 containerd[1474]: time="2025-09-13T00:09:42.211908635Z" level=error msg="encountered an error cleaning up failed sandbox \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:42.212698 containerd[1474]: time="2025-09-13T00:09:42.212074306Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8tr4,Uid:2983a866-de88-4651-bc70-4e6c5a764426,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:42.213176 kubelet[2553]: E0913 00:09:42.213098 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:42.213808 kubelet[2553]: E0913 00:09:42.213647 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t8tr4" Sep 13 00:09:42.213808 kubelet[2553]: E0913 00:09:42.213694 2553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t8tr4" Sep 13 00:09:42.213808 kubelet[2553]: E0913 00:09:42.213761 2553 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t8tr4_calico-system(2983a866-de88-4651-bc70-4e6c5a764426)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t8tr4_calico-system(2983a866-de88-4651-bc70-4e6c5a764426)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t8tr4" podUID="2983a866-de88-4651-bc70-4e6c5a764426" Sep 13 00:09:42.222097 kubelet[2553]: I0913 00:09:42.221351 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Sep 13 00:09:42.251041 kubelet[2553]: I0913 00:09:42.250969 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Sep 13 00:09:42.255681 kubelet[2553]: I0913 00:09:42.255057 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Sep 13 00:09:42.258473 kubelet[2553]: I0913 00:09:42.256728 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Sep 13 00:09:42.266117 containerd[1474]: time="2025-09-13T00:09:42.266027656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:09:42.267150 kubelet[2553]: I0913 00:09:42.266427 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Sep 13 00:09:42.275605 containerd[1474]: time="2025-09-13T00:09:42.274924917Z" level=info msg="StopPodSandbox for \"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\"" Sep 13 00:09:42.275605 containerd[1474]: time="2025-09-13T00:09:42.275028702Z" level=info msg="StopPodSandbox for \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\"" Sep 13 00:09:42.275605 containerd[1474]: time="2025-09-13T00:09:42.275230511Z" level=info msg="Ensure that sandbox 27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e in task-service has been cleanup successfully" Sep 13 00:09:42.275605 containerd[1474]: time="2025-09-13T00:09:42.275337041Z" level=info msg="Ensure that sandbox a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1 in task-service has been cleanup successfully" Sep 13 00:09:42.275967 containerd[1474]: time="2025-09-13T00:09:42.275929473Z" level=info msg="StopPodSandbox for \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\"" Sep 13 00:09:42.276315 containerd[1474]: time="2025-09-13T00:09:42.276262939Z" level=info msg="Ensure that sandbox a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae in task-service has been cleanup successfully" Sep 13 00:09:42.278738 containerd[1474]: time="2025-09-13T00:09:42.278647666Z" level=info msg="StopPodSandbox for \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\"" Sep 13 00:09:42.279249 containerd[1474]: time="2025-09-13T00:09:42.279062334Z" level=info msg="Ensure that sandbox 5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b in task-service has been 
cleanup successfully" Sep 13 00:09:42.291928 containerd[1474]: time="2025-09-13T00:09:42.291740493Z" level=info msg="StopPodSandbox for \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\"" Sep 13 00:09:42.291928 containerd[1474]: time="2025-09-13T00:09:42.291914580Z" level=info msg="StopPodSandbox for \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\"" Sep 13 00:09:42.292231 containerd[1474]: time="2025-09-13T00:09:42.292066064Z" level=info msg="Ensure that sandbox 692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a in task-service has been cleanup successfully" Sep 13 00:09:42.292276 containerd[1474]: time="2025-09-13T00:09:42.292245761Z" level=info msg="Ensure that sandbox a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0 in task-service has been cleanup successfully" Sep 13 00:09:42.298438 containerd[1474]: time="2025-09-13T00:09:42.296796875Z" level=info msg="StopPodSandbox for \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\"" Sep 13 00:09:42.298438 containerd[1474]: time="2025-09-13T00:09:42.297080316Z" level=info msg="Ensure that sandbox 760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba in task-service has been cleanup successfully" Sep 13 00:09:42.373787 containerd[1474]: time="2025-09-13T00:09:42.372513177Z" level=error msg="StopPodSandbox for \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\" failed" error="failed to destroy network for sandbox \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:42.373787 containerd[1474]: time="2025-09-13T00:09:42.372523967Z" level=error msg="StopPodSandbox for \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\" failed" error="failed to destroy network for sandbox \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:42.374010 kubelet[2553]: E0913 00:09:42.372848 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Sep 13 00:09:42.374010 kubelet[2553]: E0913 00:09:42.372982 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1"} Sep 13 00:09:42.374010 kubelet[2553]: E0913 00:09:42.373061 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b17de5d2-115f-45db-a278-f3d4d17bee79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 
00:09:42.374010 kubelet[2553]: E0913 00:09:42.373099 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b17de5d2-115f-45db-a278-f3d4d17bee79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-gg29f" podUID="b17de5d2-115f-45db-a278-f3d4d17bee79" Sep 13 00:09:42.374245 kubelet[2553]: E0913 00:09:42.373225 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Sep 13 00:09:42.374245 kubelet[2553]: E0913 00:09:42.373246 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0"} Sep 13 00:09:42.374245 kubelet[2553]: E0913 00:09:42.373268 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"925826ed-f268-4368-bf97-95adf0976969\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:09:42.374245 kubelet[2553]: E0913 00:09:42.373291 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"925826ed-f268-4368-bf97-95adf0976969\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76f56b585c-zr49b" podUID="925826ed-f268-4368-bf97-95adf0976969" Sep 13 00:09:42.374482 containerd[1474]: time="2025-09-13T00:09:42.374061293Z" level=error msg="StopPodSandbox for \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\" failed" error="failed to destroy network for sandbox \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:42.374517 kubelet[2553]: E0913 00:09:42.374270 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Sep 13 00:09:42.374517 kubelet[2553]: E0913 00:09:42.374328 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a"} Sep 13 00:09:42.374517 kubelet[2553]: E0913 00:09:42.374355 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4e7348d-2cc7-4002-b829-c570ed8d1df0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:09:42.374517 kubelet[2553]: E0913 00:09:42.374372 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4e7348d-2cc7-4002-b829-c570ed8d1df0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q44l7" podUID="a4e7348d-2cc7-4002-b829-c570ed8d1df0" Sep 13 00:09:42.385362 containerd[1474]: time="2025-09-13T00:09:42.385290712Z" level=error msg="StopPodSandbox for \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\" failed" error="failed to destroy network for sandbox \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:42.394201 containerd[1474]: time="2025-09-13T00:09:42.390384103Z" level=error msg="StopPodSandbox for \"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\" failed" error="failed to destroy network for sandbox \"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:42.394201 containerd[1474]: time="2025-09-13T00:09:42.391551855Z" level=error msg="StopPodSandbox for \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\" failed" error="failed to destroy network for sandbox \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:42.394201 containerd[1474]: time="2025-09-13T00:09:42.393066388Z" level=error msg="StopPodSandbox for \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\" failed" error="failed to destroy network for sandbox \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:42.394432 kubelet[2553]: E0913 00:09:42.390681 2553 log.go:32] "StopPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e" Sep 13 00:09:42.394432 kubelet[2553]: E0913 00:09:42.390749 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e"} Sep 13 00:09:42.394432 kubelet[2553]: E0913 00:09:42.390792 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"60e91eea-bf80-4775-825e-60892cf59ae7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:09:42.394432 kubelet[2553]: E0913 00:09:42.390820 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"60e91eea-bf80-4775-825e-60892cf59ae7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bqltl" podUID="60e91eea-bf80-4775-825e-60892cf59ae7" Sep 13 00:09:42.394658 kubelet[2553]: E0913 00:09:42.391670 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Sep 13 00:09:42.394658 kubelet[2553]: E0913 00:09:42.391694 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae"} Sep 13 00:09:42.394658 kubelet[2553]: E0913 00:09:42.391724 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"085df67f-1fcd-45a7-a238-37262e9dcfa2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:09:42.394658 kubelet[2553]: E0913 00:09:42.391749 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"085df67f-1fcd-45a7-a238-37262e9dcfa2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-759dcc9f77-fl2jj" podUID="085df67f-1fcd-45a7-a238-37262e9dcfa2" Sep 13 00:09:42.394833 kubelet[2553]: E0913 00:09:42.393262 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Sep 13 00:09:42.394833 kubelet[2553]: E0913 00:09:42.393358 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b"} Sep 13 00:09:42.394833 kubelet[2553]: E0913 00:09:42.393403 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bda40bd6-a908-46e8-9d69-02dbc2713a08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:09:42.394833 kubelet[2553]: E0913 00:09:42.393434 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bda40bd6-a908-46e8-9d69-02dbc2713a08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7854f6d79d-nbc6q" podUID="bda40bd6-a908-46e8-9d69-02dbc2713a08" Sep 13 00:09:42.395024 kubelet[2553]: E0913 00:09:42.394307 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Sep 13 00:09:42.395024 kubelet[2553]: E0913 00:09:42.394353 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba"} Sep 13 00:09:42.395024 kubelet[2553]: E0913 00:09:42.394377 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a73c3ca8-469a-40ee-8971-a5fa4ecc6460\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Sep 13 00:09:42.395024 kubelet[2553]: E0913 00:09:42.394397 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a73c3ca8-469a-40ee-8971-a5fa4ecc6460\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76f56b585c-gwf2q" podUID="a73c3ca8-469a-40ee-8971-a5fa4ecc6460" Sep 13 00:09:42.397943 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e-shm.mount: Deactivated successfully. Sep 13 00:09:42.398111 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0-shm.mount: Deactivated successfully. Sep 13 00:09:42.398198 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b-shm.mount: Deactivated successfully. Sep 13 00:09:43.270288 kubelet[2553]: I0913 00:09:43.270240 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Sep 13 00:09:43.271015 containerd[1474]: time="2025-09-13T00:09:43.270934221Z" level=info msg="StopPodSandbox for \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\"" Sep 13 00:09:43.271335 containerd[1474]: time="2025-09-13T00:09:43.271181234Z" level=info msg="Ensure that sandbox da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766 in task-service has been cleanup successfully" Sep 13 00:09:43.302496 containerd[1474]: time="2025-09-13T00:09:43.302415512Z" level=error msg="StopPodSandbox for \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\" failed" error="failed to destroy network for sandbox \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:43.302802 kubelet[2553]: E0913 00:09:43.302716 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Sep 13 00:09:43.302857 kubelet[2553]: E0913 00:09:43.302817 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766"} Sep 13 00:09:43.302896 kubelet[2553]: E0913 00:09:43.302861 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2983a866-de88-4651-bc70-4e6c5a764426\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:09:43.303077 kubelet[2553]: E0913 00:09:43.302889 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2983a866-de88-4651-bc70-4e6c5a764426\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t8tr4" podUID="2983a866-de88-4651-bc70-4e6c5a764426" Sep 13 00:09:50.048086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3764264253.mount: Deactivated successfully. Sep 13 00:09:53.086534 containerd[1474]: time="2025-09-13T00:09:53.085448512Z" level=info msg="StopPodSandbox for \"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\"" Sep 13 00:09:53.602810 containerd[1474]: time="2025-09-13T00:09:53.602709550Z" level=error msg="StopPodSandbox for \"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\" failed" error="failed to destroy network for sandbox \"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:09:53.608941 kubelet[2553]: E0913 00:09:53.603043 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e" Sep 13 00:09:53.608941 kubelet[2553]: E0913 00:09:53.603112 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e"} Sep 13 00:09:53.608941 kubelet[2553]: E0913 00:09:53.603156 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"60e91eea-bf80-4775-825e-60892cf59ae7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:09:53.608941 kubelet[2553]: E0913 00:09:53.603209 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"60e91eea-bf80-4775-825e-60892cf59ae7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bqltl" podUID="60e91eea-bf80-4775-825e-60892cf59ae7" Sep 13 
00:09:53.746135 containerd[1474]: time="2025-09-13T00:09:53.746044472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:09:53.751602 containerd[1474]: time="2025-09-13T00:09:53.751551775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 11.485461312s" Sep 13 00:09:53.751602 containerd[1474]: time="2025-09-13T00:09:53.751597882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:09:53.761353 containerd[1474]: time="2025-09-13T00:09:53.761280267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:53.762883 containerd[1474]: time="2025-09-13T00:09:53.762589875Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:53.764151 containerd[1474]: time="2025-09-13T00:09:53.763528567Z" level=info msg="CreateContainer within sandbox \"4c1463209d6904a980d2f150b9e48fb515680b9fe7fa7ca3cab3d61ab47c2a15\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:09:53.764151 containerd[1474]: time="2025-09-13T00:09:53.763732239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:09:53.803646 containerd[1474]: time="2025-09-13T00:09:53.803590292Z" level=info msg="CreateContainer within sandbox \"4c1463209d6904a980d2f150b9e48fb515680b9fe7fa7ca3cab3d61ab47c2a15\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a405efe26983aff12979b5dc8be10231af683a85fdbb6aa9e7db42860abfcdf3\"" Sep 13 00:09:53.804440 containerd[1474]: time="2025-09-13T00:09:53.804394812Z" level=info msg="StartContainer for \"a405efe26983aff12979b5dc8be10231af683a85fdbb6aa9e7db42860abfcdf3\"" Sep 13 00:09:53.877210 systemd[1]: Started cri-containerd-a405efe26983aff12979b5dc8be10231af683a85fdbb6aa9e7db42860abfcdf3.scope - libcontainer container a405efe26983aff12979b5dc8be10231af683a85fdbb6aa9e7db42860abfcdf3. Sep 13 00:09:53.916139 containerd[1474]: time="2025-09-13T00:09:53.916064531Z" level=info msg="StartContainer for \"a405efe26983aff12979b5dc8be10231af683a85fdbb6aa9e7db42860abfcdf3\" returns successfully" Sep 13 00:09:54.015587 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:09:54.016273 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Sep 13 00:09:54.185159 containerd[1474]: time="2025-09-13T00:09:54.184766111Z" level=info msg="StopPodSandbox for \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\"" Sep 13 00:09:54.542881 containerd[1474]: 2025-09-13 00:09:54.254 [INFO][3860] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Sep 13 00:09:54.542881 containerd[1474]: 2025-09-13 00:09:54.254 [INFO][3860] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" iface="eth0" netns="/var/run/netns/cni-aa8f271d-aa65-4bbd-abb8-90c5c1f1c753" Sep 13 00:09:54.542881 containerd[1474]: 2025-09-13 00:09:54.256 [INFO][3860] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" iface="eth0" netns="/var/run/netns/cni-aa8f271d-aa65-4bbd-abb8-90c5c1f1c753" Sep 13 00:09:54.542881 containerd[1474]: 2025-09-13 00:09:54.256 [INFO][3860] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" iface="eth0" netns="/var/run/netns/cni-aa8f271d-aa65-4bbd-abb8-90c5c1f1c753" Sep 13 00:09:54.542881 containerd[1474]: 2025-09-13 00:09:54.256 [INFO][3860] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Sep 13 00:09:54.542881 containerd[1474]: 2025-09-13 00:09:54.256 [INFO][3860] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Sep 13 00:09:54.542881 containerd[1474]: 2025-09-13 00:09:54.338 [INFO][3869] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" HandleID="k8s-pod-network.a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Workload="localhost-k8s-whisker--759dcc9f77--fl2jj-eth0" Sep 13 00:09:54.542881 containerd[1474]: 2025-09-13 00:09:54.339 [INFO][3869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:09:54.542881 containerd[1474]: 2025-09-13 00:09:54.339 [INFO][3869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:09:54.542881 containerd[1474]: 2025-09-13 00:09:54.534 [WARNING][3869] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" HandleID="k8s-pod-network.a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Workload="localhost-k8s-whisker--759dcc9f77--fl2jj-eth0" Sep 13 00:09:54.542881 containerd[1474]: 2025-09-13 00:09:54.534 [INFO][3869] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" HandleID="k8s-pod-network.a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Workload="localhost-k8s-whisker--759dcc9f77--fl2jj-eth0" Sep 13 00:09:54.542881 containerd[1474]: 2025-09-13 00:09:54.536 [INFO][3869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:09:54.542881 containerd[1474]: 2025-09-13 00:09:54.539 [INFO][3860] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Sep 13 00:09:54.593457 containerd[1474]: time="2025-09-13T00:09:54.543022890Z" level=info msg="TearDown network for sandbox \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\" successfully" Sep 13 00:09:54.593457 containerd[1474]: time="2025-09-13T00:09:54.543054128Z" level=info msg="StopPodSandbox for \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\" returns successfully" Sep 13 00:09:54.727430 kubelet[2553]: I0913 00:09:54.727357 2553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/085df67f-1fcd-45a7-a238-37262e9dcfa2-whisker-backend-key-pair\") pod \"085df67f-1fcd-45a7-a238-37262e9dcfa2\" (UID: \"085df67f-1fcd-45a7-a238-37262e9dcfa2\") " Sep 13 00:09:54.727430 kubelet[2553]: I0913 00:09:54.727409 2553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbx9z\" (UniqueName: \"kubernetes.io/projected/085df67f-1fcd-45a7-a238-37262e9dcfa2-kube-api-access-hbx9z\") pod \"085df67f-1fcd-45a7-a238-37262e9dcfa2\" (UID: \"085df67f-1fcd-45a7-a238-37262e9dcfa2\") " Sep 13 00:09:54.727430 kubelet[2553]: I0913 00:09:54.727436 2553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/085df67f-1fcd-45a7-a238-37262e9dcfa2-whisker-ca-bundle\") pod \"085df67f-1fcd-45a7-a238-37262e9dcfa2\" (UID: \"085df67f-1fcd-45a7-a238-37262e9dcfa2\") " Sep 13 00:09:54.728080 kubelet[2553]: I0913 00:09:54.728041 2553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/085df67f-1fcd-45a7-a238-37262e9dcfa2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "085df67f-1fcd-45a7-a238-37262e9dcfa2" (UID: "085df67f-1fcd-45a7-a238-37262e9dcfa2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:09:54.732859 kubelet[2553]: I0913 00:09:54.732813 2553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/085df67f-1fcd-45a7-a238-37262e9dcfa2-kube-api-access-hbx9z" (OuterVolumeSpecName: "kube-api-access-hbx9z") pod "085df67f-1fcd-45a7-a238-37262e9dcfa2" (UID: "085df67f-1fcd-45a7-a238-37262e9dcfa2"). InnerVolumeSpecName "kube-api-access-hbx9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:09:54.733074 kubelet[2553]: I0913 00:09:54.733029 2553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085df67f-1fcd-45a7-a238-37262e9dcfa2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "085df67f-1fcd-45a7-a238-37262e9dcfa2" (UID: "085df67f-1fcd-45a7-a238-37262e9dcfa2"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:09:54.761312 systemd[1]: run-netns-cni\x2daa8f271d\x2daa65\x2d4bbd\x2dabb8\x2d90c5c1f1c753.mount: Deactivated successfully. Sep 13 00:09:54.761453 systemd[1]: var-lib-kubelet-pods-085df67f\x2d1fcd\x2d45a7\x2da238\x2d37262e9dcfa2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhbx9z.mount: Deactivated successfully. Sep 13 00:09:54.761565 systemd[1]: var-lib-kubelet-pods-085df67f\x2d1fcd\x2d45a7\x2da238\x2d37262e9dcfa2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 13 00:09:54.828440 kubelet[2553]: I0913 00:09:54.828267 2553 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/085df67f-1fcd-45a7-a238-37262e9dcfa2-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 00:09:54.828440 kubelet[2553]: I0913 00:09:54.828306 2553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbx9z\" (UniqueName: \"kubernetes.io/projected/085df67f-1fcd-45a7-a238-37262e9dcfa2-kube-api-access-hbx9z\") on node \"localhost\" DevicePath \"\"" Sep 13 00:09:54.828440 kubelet[2553]: I0913 00:09:54.828409 2553 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/085df67f-1fcd-45a7-a238-37262e9dcfa2-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 00:09:54.868611 kubelet[2553]: I0913 00:09:54.867822 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nvnsw" podStartSLOduration=2.004453556 podStartE2EDuration="23.867800954s" podCreationTimestamp="2025-09-13 00:09:31 +0000 UTC" firstStartedPulling="2025-09-13 00:09:31.888943714 +0000 UTC m=+22.901105125" lastFinishedPulling="2025-09-13 00:09:53.752291112 +0000 UTC m=+44.764452523" observedRunningTime="2025-09-13 00:09:54.867507341 +0000 UTC m=+45.879668762" watchObservedRunningTime="2025-09-13 00:09:54.867800954 +0000 UTC m=+45.879962365" Sep 13 00:09:54.959860 systemd[1]: Removed slice kubepods-besteffort-pod085df67f_1fcd_45a7_a238_37262e9dcfa2.slice - libcontainer container kubepods-besteffort-pod085df67f_1fcd_45a7_a238_37262e9dcfa2.slice. Sep 13 00:09:55.086108 containerd[1474]: time="2025-09-13T00:09:55.085344560Z" level=info msg="StopPodSandbox for \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\"" Sep 13 00:09:55.086108 containerd[1474]: time="2025-09-13T00:09:55.085437941Z" level=info msg="StopPodSandbox for \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\"" Sep 13 00:09:55.086108 containerd[1474]: time="2025-09-13T00:09:55.085940807Z" level=info msg="StopPodSandbox for \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\"" Sep 13 00:09:56.085042 containerd[1474]: time="2025-09-13T00:09:56.084975400Z" level=info msg="StopPodSandbox for \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\"" Sep 13 00:09:56.220561 containerd[1474]: 2025-09-13 00:09:55.648 [INFO][3921] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Sep 13 00:09:56.220561 containerd[1474]: 2025-09-13 00:09:55.648 [INFO][3921] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" iface="eth0" netns="/var/run/netns/cni-74aeab22-53c8-8fd2-1106-5fe8303476f7" Sep 13 00:09:56.220561 containerd[1474]: 2025-09-13 00:09:55.648 [INFO][3921] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" iface="eth0" netns="/var/run/netns/cni-74aeab22-53c8-8fd2-1106-5fe8303476f7" Sep 13 00:09:56.220561 containerd[1474]: 2025-09-13 00:09:55.648 [INFO][3921] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" iface="eth0" netns="/var/run/netns/cni-74aeab22-53c8-8fd2-1106-5fe8303476f7" Sep 13 00:09:56.220561 containerd[1474]: 2025-09-13 00:09:55.648 [INFO][3921] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Sep 13 00:09:56.220561 containerd[1474]: 2025-09-13 00:09:55.648 [INFO][3921] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Sep 13 00:09:56.220561 containerd[1474]: 2025-09-13 00:09:55.671 [INFO][3945] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" HandleID="k8s-pod-network.a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Workload="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" Sep 13 00:09:56.220561 containerd[1474]: 2025-09-13 00:09:55.671 [INFO][3945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:09:56.220561 containerd[1474]: 2025-09-13 00:09:55.671 [INFO][3945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:09:56.220561 containerd[1474]: 2025-09-13 00:09:55.882 [WARNING][3945] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" HandleID="k8s-pod-network.a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Workload="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" Sep 13 00:09:56.220561 containerd[1474]: 2025-09-13 00:09:55.882 [INFO][3945] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" HandleID="k8s-pod-network.a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Workload="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" Sep 13 00:09:56.220561 containerd[1474]: 2025-09-13 00:09:56.204 [INFO][3945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:09:56.220561 containerd[1474]: 2025-09-13 00:09:56.212 [INFO][3921] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Sep 13 00:09:56.220962 containerd[1474]: time="2025-09-13T00:09:56.220643062Z" level=info msg="TearDown network for sandbox \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\" successfully" Sep 13 00:09:56.220962 containerd[1474]: time="2025-09-13T00:09:56.220667339Z" level=info msg="StopPodSandbox for \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\" returns successfully" Sep 13 00:09:56.224957 systemd[1]: run-netns-cni\x2d74aeab22\x2d53c8\x2d8fd2\x2d1106\x2d5fe8303476f7.mount: Deactivated successfully. Sep 13 00:09:56.226411 containerd[1474]: time="2025-09-13T00:09:56.226352251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f56b585c-zr49b,Uid:925826ed-f268-4368-bf97-95adf0976969,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:09:56.447254 systemd[1]: Started sshd@9-10.0.0.89:22-10.0.0.1:50544.service - OpenSSH per-connection server daemon (10.0.0.1:50544). 
Sep 13 00:09:56.461709 containerd[1474]: 2025-09-13 00:09:55.717 [INFO][3922] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Sep 13 00:09:56.461709 containerd[1474]: 2025-09-13 00:09:55.717 [INFO][3922] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" iface="eth0" netns="/var/run/netns/cni-10c9818c-14d9-b5e2-715a-0c80ccb0287a" Sep 13 00:09:56.461709 containerd[1474]: 2025-09-13 00:09:55.717 [INFO][3922] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" iface="eth0" netns="/var/run/netns/cni-10c9818c-14d9-b5e2-715a-0c80ccb0287a" Sep 13 00:09:56.461709 containerd[1474]: 2025-09-13 00:09:55.717 [INFO][3922] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" iface="eth0" netns="/var/run/netns/cni-10c9818c-14d9-b5e2-715a-0c80ccb0287a" Sep 13 00:09:56.461709 containerd[1474]: 2025-09-13 00:09:55.717 [INFO][3922] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Sep 13 00:09:56.461709 containerd[1474]: 2025-09-13 00:09:55.717 [INFO][3922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Sep 13 00:09:56.461709 containerd[1474]: 2025-09-13 00:09:55.739 [INFO][3955] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" HandleID="k8s-pod-network.760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Workload="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" Sep 13 00:09:56.461709 containerd[1474]: 2025-09-13 00:09:55.739 [INFO][3955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:09:56.461709 containerd[1474]: 2025-09-13 00:09:56.204 [INFO][3955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:09:56.461709 containerd[1474]: 2025-09-13 00:09:56.373 [WARNING][3955] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" HandleID="k8s-pod-network.760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Workload="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" Sep 13 00:09:56.461709 containerd[1474]: 2025-09-13 00:09:56.373 [INFO][3955] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" HandleID="k8s-pod-network.760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Workload="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" Sep 13 00:09:56.461709 containerd[1474]: 2025-09-13 00:09:56.453 [INFO][3955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:09:56.461709 containerd[1474]: 2025-09-13 00:09:56.459 [INFO][3922] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Sep 13 00:09:56.462335 containerd[1474]: time="2025-09-13T00:09:56.462029864Z" level=info msg="TearDown network for sandbox \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\" successfully" Sep 13 00:09:56.462335 containerd[1474]: time="2025-09-13T00:09:56.462064371Z" level=info msg="StopPodSandbox for \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\" returns successfully" Sep 13 00:09:56.464759 containerd[1474]: time="2025-09-13T00:09:56.464721022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f56b585c-gwf2q,Uid:a73c3ca8-469a-40ee-8971-a5fa4ecc6460,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:09:56.465090 systemd[1]: run-netns-cni\x2d10c9818c\x2d14d9\x2db5e2\x2d715a\x2d0c80ccb0287a.mount: Deactivated successfully. Sep 13 00:09:56.634758 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 50544 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:09:56.637405 sshd[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:56.642612 systemd-logind[1449]: New session 10 of user core. Sep 13 00:09:56.648154 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:09:56.693883 containerd[1474]: 2025-09-13 00:09:55.884 [INFO][3923] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Sep 13 00:09:56.693883 containerd[1474]: 2025-09-13 00:09:55.884 [INFO][3923] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" iface="eth0" netns="/var/run/netns/cni-ba81815e-704a-5063-c008-c9b22828b70f" Sep 13 00:09:56.693883 containerd[1474]: 2025-09-13 00:09:55.885 [INFO][3923] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" iface="eth0" netns="/var/run/netns/cni-ba81815e-704a-5063-c008-c9b22828b70f" Sep 13 00:09:56.693883 containerd[1474]: 2025-09-13 00:09:55.885 [INFO][3923] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" iface="eth0" netns="/var/run/netns/cni-ba81815e-704a-5063-c008-c9b22828b70f" Sep 13 00:09:56.693883 containerd[1474]: 2025-09-13 00:09:55.885 [INFO][3923] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Sep 13 00:09:56.693883 containerd[1474]: 2025-09-13 00:09:55.885 [INFO][3923] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Sep 13 00:09:56.693883 containerd[1474]: 2025-09-13 00:09:55.923 [INFO][4052] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" HandleID="k8s-pod-network.692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Workload="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" Sep 13 00:09:56.693883 containerd[1474]: 2025-09-13 00:09:55.925 [INFO][4052] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:09:56.693883 containerd[1474]: 2025-09-13 00:09:56.453 [INFO][4052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:09:56.693883 containerd[1474]: 2025-09-13 00:09:56.544 [WARNING][4052] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" HandleID="k8s-pod-network.692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Workload="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" Sep 13 00:09:56.693883 containerd[1474]: 2025-09-13 00:09:56.544 [INFO][4052] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" HandleID="k8s-pod-network.692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Workload="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" Sep 13 00:09:56.693883 containerd[1474]: 2025-09-13 00:09:56.664 [INFO][4052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:09:56.693883 containerd[1474]: 2025-09-13 00:09:56.688 [INFO][3923] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Sep 13 00:09:56.718263 kubelet[2553]: E0913 00:09:56.694671 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:56.697469 systemd[1]: run-netns-cni\x2dba81815e\x2d704a\x2d5063\x2dc008\x2dc9b22828b70f.mount: Deactivated successfully. Sep 13 00:09:56.718654 containerd[1474]: time="2025-09-13T00:09:56.694296644Z" level=info msg="TearDown network for sandbox \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\" successfully" Sep 13 00:09:56.718654 containerd[1474]: time="2025-09-13T00:09:56.694333806Z" level=info msg="StopPodSandbox for \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\" returns successfully" Sep 13 00:09:56.718654 containerd[1474]: time="2025-09-13T00:09:56.698587453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q44l7,Uid:a4e7348d-2cc7-4002-b829-c570ed8d1df0,Namespace:kube-system,Attempt:1,}" Sep 13 00:09:56.707099 systemd[1]: Created slice kubepods-besteffort-pode31d705d_f18a_4056_9103_cd026858c652.slice - libcontainer container kubepods-besteffort-pode31d705d_f18a_4056_9103_cd026858c652.slice. 
Sep 13 00:09:56.840269 kubelet[2553]: I0913 00:09:56.840122 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e31d705d-f18a-4056-9103-cd026858c652-whisker-backend-key-pair\") pod \"whisker-6b6bbf4cfb-7ddpj\" (UID: \"e31d705d-f18a-4056-9103-cd026858c652\") " pod="calico-system/whisker-6b6bbf4cfb-7ddpj" Sep 13 00:09:56.840269 kubelet[2553]: I0913 00:09:56.840173 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx79n\" (UniqueName: \"kubernetes.io/projected/e31d705d-f18a-4056-9103-cd026858c652-kube-api-access-gx79n\") pod \"whisker-6b6bbf4cfb-7ddpj\" (UID: \"e31d705d-f18a-4056-9103-cd026858c652\") " pod="calico-system/whisker-6b6bbf4cfb-7ddpj" Sep 13 00:09:56.840269 kubelet[2553]: I0913 00:09:56.840190 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e31d705d-f18a-4056-9103-cd026858c652-whisker-ca-bundle\") pod \"whisker-6b6bbf4cfb-7ddpj\" (UID: \"e31d705d-f18a-4056-9103-cd026858c652\") " pod="calico-system/whisker-6b6bbf4cfb-7ddpj" Sep 13 00:09:56.885065 containerd[1474]: 2025-09-13 00:09:56.372 [INFO][4083] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Sep 13 00:09:56.885065 containerd[1474]: 2025-09-13 00:09:56.372 [INFO][4083] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" iface="eth0" netns="/var/run/netns/cni-45551b24-382d-c987-3659-f872cee103ec" Sep 13 00:09:56.885065 containerd[1474]: 2025-09-13 00:09:56.374 [INFO][4083] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" iface="eth0" netns="/var/run/netns/cni-45551b24-382d-c987-3659-f872cee103ec" Sep 13 00:09:56.885065 containerd[1474]: 2025-09-13 00:09:56.374 [INFO][4083] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" iface="eth0" netns="/var/run/netns/cni-45551b24-382d-c987-3659-f872cee103ec" Sep 13 00:09:56.885065 containerd[1474]: 2025-09-13 00:09:56.374 [INFO][4083] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Sep 13 00:09:56.885065 containerd[1474]: 2025-09-13 00:09:56.374 [INFO][4083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Sep 13 00:09:56.885065 containerd[1474]: 2025-09-13 00:09:56.411 [INFO][4092] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" HandleID="k8s-pod-network.a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Workload="localhost-k8s-goldmane--7988f88666--gg29f-eth0" Sep 13 00:09:56.885065 containerd[1474]: 2025-09-13 00:09:56.411 [INFO][4092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:09:56.885065 containerd[1474]: 2025-09-13 00:09:56.664 [INFO][4092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:09:56.885065 containerd[1474]: 2025-09-13 00:09:56.815 [WARNING][4092] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" HandleID="k8s-pod-network.a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Workload="localhost-k8s-goldmane--7988f88666--gg29f-eth0" Sep 13 00:09:56.885065 containerd[1474]: 2025-09-13 00:09:56.815 [INFO][4092] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" HandleID="k8s-pod-network.a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Workload="localhost-k8s-goldmane--7988f88666--gg29f-eth0" Sep 13 00:09:56.885065 containerd[1474]: 2025-09-13 00:09:56.857 [INFO][4092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:09:56.885065 containerd[1474]: 2025-09-13 00:09:56.872 [INFO][4083] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Sep 13 00:09:56.885065 containerd[1474]: time="2025-09-13T00:09:56.884360972Z" level=info msg="TearDown network for sandbox \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\" successfully" Sep 13 00:09:56.885065 containerd[1474]: time="2025-09-13T00:09:56.884390269Z" level=info msg="StopPodSandbox for \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\" returns successfully" Sep 13 00:09:56.889041 containerd[1474]: time="2025-09-13T00:09:56.886945362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gg29f,Uid:b17de5d2-115f-45db-a278-f3d4d17bee79,Namespace:calico-system,Attempt:1,}" Sep 13 00:09:56.970847 kubelet[2553]: I0913 00:09:56.970233 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:09:56.988498 sshd[4100]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:56.994461 systemd[1]: sshd@9-10.0.0.89:22-10.0.0.1:50544.service: Deactivated successfully. Sep 13 00:09:57.000830 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:09:57.008625 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:09:57.011193 systemd-logind[1449]: Removed session 10. 
Sep 13 00:09:57.024376 containerd[1474]: time="2025-09-13T00:09:57.023936892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b6bbf4cfb-7ddpj,Uid:e31d705d-f18a-4056-9103-cd026858c652,Namespace:calico-system,Attempt:0,}" Sep 13 00:09:57.029352 systemd-networkd[1398]: cali16f1f4fff48: Link UP Sep 13 00:09:57.030692 systemd-networkd[1398]: cali16f1f4fff48: Gained carrier Sep 13 00:09:57.085341 containerd[1474]: time="2025-09-13T00:09:57.085285577Z" level=info msg="StopPodSandbox for \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\"" Sep 13 00:09:57.086165 containerd[1474]: time="2025-09-13T00:09:57.086044828Z" level=info msg="StopPodSandbox for \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\"" Sep 13 00:09:57.088817 kubelet[2553]: I0913 00:09:57.088752 2553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="085df67f-1fcd-45a7-a238-37262e9dcfa2" path="/var/lib/kubelet/pods/085df67f-1fcd-45a7-a238-37262e9dcfa2/volumes" Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.723 [INFO][4104] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.818 [INFO][4104] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0 calico-apiserver-76f56b585c- calico-apiserver 925826ed-f268-4368-bf97-95adf0976969 964 0 2025-09-13 00:09:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76f56b585c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76f56b585c-zr49b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali16f1f4fff48 [] [] }} ContainerID="f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" Namespace="calico-apiserver" Pod="calico-apiserver-76f56b585c-zr49b" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f56b585c--zr49b-" Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.818 [INFO][4104] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" Namespace="calico-apiserver" Pod="calico-apiserver-76f56b585c-zr49b" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.919 [INFO][4129] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" HandleID="k8s-pod-network.f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" Workload="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.919 [INFO][4129] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" HandleID="k8s-pod-network.f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" Workload="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bc2a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76f56b585c-zr49b", "timestamp":"2025-09-13 00:09:56.919317726 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.921 [INFO][4129] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.921 [INFO][4129] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.922 [INFO][4129] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.933 [INFO][4129] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" host="localhost" Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.954 [INFO][4129] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.970 [INFO][4129] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.979 [INFO][4129] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.981 [INFO][4129] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.981 [INFO][4129] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" host="localhost" Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.983 [INFO][4129] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21 Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.990 [INFO][4129] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" host="localhost" Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.997 [INFO][4129] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" host="localhost" Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.998 [INFO][4129] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" host="localhost" Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.998 [INFO][4129] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:09:57.115293 containerd[1474]: 2025-09-13 00:09:56.998 [INFO][4129] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" HandleID="k8s-pod-network.f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" Workload="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" Sep 13 00:09:57.115920 containerd[1474]: 2025-09-13 00:09:57.005 [INFO][4104] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" Namespace="calico-apiserver" Pod="calico-apiserver-76f56b585c-zr49b" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0", GenerateName:"calico-apiserver-76f56b585c-", Namespace:"calico-apiserver", SelfLink:"", UID:"925826ed-f268-4368-bf97-95adf0976969", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f56b585c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76f56b585c-zr49b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16f1f4fff48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:09:57.115920 containerd[1474]: 2025-09-13 00:09:57.009 [INFO][4104] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" Namespace="calico-apiserver" Pod="calico-apiserver-76f56b585c-zr49b" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" Sep 13 00:09:57.115920 containerd[1474]: 2025-09-13 00:09:57.010 [INFO][4104] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16f1f4fff48 ContainerID="f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" Namespace="calico-apiserver" Pod="calico-apiserver-76f56b585c-zr49b" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" Sep 13 00:09:57.115920 containerd[1474]: 2025-09-13 00:09:57.033 [INFO][4104] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" Namespace="calico-apiserver" Pod="calico-apiserver-76f56b585c-zr49b" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" Sep 13 00:09:57.115920 containerd[1474]: 2025-09-13 00:09:57.034 [INFO][4104] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" Namespace="calico-apiserver" Pod="calico-apiserver-76f56b585c-zr49b" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0", GenerateName:"calico-apiserver-76f56b585c-", Namespace:"calico-apiserver", SelfLink:"", UID:"925826ed-f268-4368-bf97-95adf0976969", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f56b585c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21", Pod:"calico-apiserver-76f56b585c-zr49b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16f1f4fff48", MAC:"4a:26:c1:f5:fa:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:09:57.115920 containerd[1474]: 2025-09-13 00:09:57.109 [INFO][4104] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21" Namespace="calico-apiserver" Pod="calico-apiserver-76f56b585c-zr49b" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" Sep 13 00:09:57.170563 containerd[1474]: time="2025-09-13T00:09:57.169289667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:09:57.170563 containerd[1474]: time="2025-09-13T00:09:57.170139463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:09:57.170563 containerd[1474]: time="2025-09-13T00:09:57.170153560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:57.170563 containerd[1474]: time="2025-09-13T00:09:57.170257843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:57.208400 systemd-networkd[1398]: cali92b8ab1575d: Link UP Sep 13 00:09:57.212448 systemd-networkd[1398]: cali92b8ab1575d: Gained carrier Sep 13 00:09:57.213538 systemd[1]: Started cri-containerd-f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21.scope - libcontainer container f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21. Sep 13 00:09:57.239142 systemd[1]: run-netns-cni\x2d45551b24\x2d382d\x2dc987\x2d3659\x2df872cee103ec.mount: Deactivated successfully. 
Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:56.945 [INFO][4137] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:56.967 [INFO][4137] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0 calico-apiserver-76f56b585c- calico-apiserver a73c3ca8-469a-40ee-8971-a5fa4ecc6460 968 0 2025-09-13 00:09:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76f56b585c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76f56b585c-gwf2q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali92b8ab1575d [] [] }} ContainerID="98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" Namespace="calico-apiserver" Pod="calico-apiserver-76f56b585c-gwf2q" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-" Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:56.967 [INFO][4137] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" Namespace="calico-apiserver" Pod="calico-apiserver-76f56b585c-gwf2q" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:57.003 [INFO][4189] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" HandleID="k8s-pod-network.98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" Workload="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:57.003 [INFO][4189] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" HandleID="k8s-pod-network.98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" Workload="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e0c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76f56b585c-gwf2q", "timestamp":"2025-09-13 00:09:57.003433416 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:57.003 [INFO][4189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:57.004 [INFO][4189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:57.004 [INFO][4189] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:57.036 [INFO][4189] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" host="localhost" Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:57.046 [INFO][4189] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:57.114 [INFO][4189] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:57.119 [INFO][4189] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:57.121 [INFO][4189] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:57.121 [INFO][4189] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" host="localhost" Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:57.123 [INFO][4189] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:57.133 [INFO][4189] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" host="localhost" Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:57.148 [INFO][4189] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" host="localhost" Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:57.148 [INFO][4189] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" host="localhost" Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:57.148 [INFO][4189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:09:57.276601 containerd[1474]: 2025-09-13 00:09:57.148 [INFO][4189] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" HandleID="k8s-pod-network.98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" Workload="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" Sep 13 00:09:57.277397 containerd[1474]: 2025-09-13 00:09:57.196 [INFO][4137] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" Namespace="calico-apiserver" Pod="calico-apiserver-76f56b585c-gwf2q" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0", GenerateName:"calico-apiserver-76f56b585c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a73c3ca8-469a-40ee-8971-a5fa4ecc6460", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f56b585c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76f56b585c-gwf2q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92b8ab1575d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:09:57.277397 containerd[1474]: 2025-09-13 00:09:57.198 [INFO][4137] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" Namespace="calico-apiserver" Pod="calico-apiserver-76f56b585c-gwf2q" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" Sep 13 00:09:57.277397 containerd[1474]: 2025-09-13 00:09:57.198 [INFO][4137] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92b8ab1575d ContainerID="98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" Namespace="calico-apiserver" Pod="calico-apiserver-76f56b585c-gwf2q" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" Sep 13 00:09:57.277397 containerd[1474]: 2025-09-13 00:09:57.207 [INFO][4137] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" Namespace="calico-apiserver" Pod="calico-apiserver-76f56b585c-gwf2q" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" Sep 13 00:09:57.277397 containerd[1474]: 2025-09-13 00:09:57.210 [INFO][4137] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" Namespace="calico-apiserver" Pod="calico-apiserver-76f56b585c-gwf2q" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0", GenerateName:"calico-apiserver-76f56b585c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a73c3ca8-469a-40ee-8971-a5fa4ecc6460", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f56b585c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba", Pod:"calico-apiserver-76f56b585c-gwf2q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92b8ab1575d", MAC:"a2:2b:e1:56:41:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:09:57.277397 containerd[1474]: 2025-09-13 00:09:57.263 [INFO][4137] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba" Namespace="calico-apiserver" Pod="calico-apiserver-76f56b585c-gwf2q" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" Sep 13 00:09:57.291607 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:09:57.422039 systemd-networkd[1398]: calid054275f013: Link UP Sep 13 00:09:57.423801 systemd-networkd[1398]: calid054275f013: Gained carrier Sep 13 00:09:57.427784 containerd[1474]: time="2025-09-13T00:09:57.426769799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f56b585c-zr49b,Uid:925826ed-f268-4368-bf97-95adf0976969,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21\"" Sep 13 00:09:57.433959 containerd[1474]: time="2025-09-13T00:09:57.433676607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:09:57.449121 containerd[1474]: time="2025-09-13T00:09:57.447039011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:09:57.449121 containerd[1474]: time="2025-09-13T00:09:57.447130568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:09:57.449121 containerd[1474]: time="2025-09-13T00:09:57.447145377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:57.449121 containerd[1474]: time="2025-09-13T00:09:57.447269688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:56.941 [INFO][4149] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:56.961 [INFO][4149] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0 coredns-7c65d6cfc9- kube-system a4e7348d-2cc7-4002-b829-c570ed8d1df0 969 0 2025-09-13 00:09:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-q44l7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid054275f013 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q44l7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q44l7-" Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:56.961 [INFO][4149] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q44l7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:57.021 [INFO][4182] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" HandleID="k8s-pod-network.d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" Workload="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:57.022 [INFO][4182] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" HandleID="k8s-pod-network.d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" Workload="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f860), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-q44l7", "timestamp":"2025-09-13 00:09:57.021668598 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:57.022 [INFO][4182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:57.150 [INFO][4182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
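
The WorkloadEndpoint names in these entries (for example localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0) follow a rule that can be read off the log itself: node, "k8s", pod name, and interface joined with "-", with any dash inside the pod name doubled so the separators stay unambiguous. A sketch reproducing it; whether the node and interface parts are escaped the same way is an assumption, since neither contains a dash here:

```go
package main

import (
	"fmt"
	"strings"
)

// wepName reproduces the WorkloadEndpoint naming visible in the log:
// "-" inside the pod name is doubled, then the parts are joined with "-".
func wepName(node, pod, iface string) string {
	escaped := strings.ReplaceAll(pod, "-", "--")
	return fmt.Sprintf("%s-k8s-%s-%s", node, escaped, iface)
}

func main() {
	fmt.Println(wepName("localhost", "coredns-7c65d6cfc9-q44l7", "eth0"))
	// localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0, matching the log
}
```
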
Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:57.150 [INFO][4182] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:57.181 [INFO][4182] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" host="localhost" Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:57.197 [INFO][4182] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:57.272 [INFO][4182] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:57.276 [INFO][4182] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:57.279 [INFO][4182] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:57.281 [INFO][4182] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" host="localhost" Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:57.284 [INFO][4182] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:57.314 [INFO][4182] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" host="localhost" Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:57.399 [INFO][4182] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" host="localhost" Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:57.399 [INFO][4182] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" host="localhost" Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:57.399 [INFO][4182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
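
The HandleID strings above are simply a fixed prefix plus the full container ID, giving IPAM one key under which all of a container's addresses can later be found and released:

```go
package main

import "fmt"

// handleID mirrors the HandleID values in the log: the fixed prefix
// "k8s-pod-network." followed by the container ID.
func handleID(containerID string) string {
	return "k8s-pod-network." + containerID
}

func main() {
	fmt.Println(handleID("d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f"))
}
```
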
Sep 13 00:09:57.451754 containerd[1474]: 2025-09-13 00:09:57.399 [INFO][4182] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" HandleID="k8s-pod-network.d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" Workload="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" Sep 13 00:09:57.452295 containerd[1474]: 2025-09-13 00:09:57.412 [INFO][4149] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q44l7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a4e7348d-2cc7-4002-b829-c570ed8d1df0", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-q44l7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid054275f013", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:09:57.452295 containerd[1474]: 2025-09-13 00:09:57.412 [INFO][4149] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q44l7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" Sep 13 00:09:57.452295 containerd[1474]: 2025-09-13 00:09:57.412 [INFO][4149] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid054275f013 ContainerID="d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q44l7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" Sep 13 00:09:57.452295 containerd[1474]: 2025-09-13 00:09:57.425 [INFO][4149] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q44l7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" Sep 13 00:09:57.452295 
containerd[1474]: 2025-09-13 00:09:57.429 [INFO][4149] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q44l7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a4e7348d-2cc7-4002-b829-c570ed8d1df0", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f", Pod:"coredns-7c65d6cfc9-q44l7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid054275f013", MAC:"8e:f5:51:bf:46:4b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:09:57.452295 containerd[1474]: 2025-09-13 00:09:57.444 [INFO][4149] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q44l7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" Sep 13 00:09:57.485361 systemd[1]: Started cri-containerd-98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba.scope - libcontainer container 98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba. Sep 13 00:09:57.492343 containerd[1474]: time="2025-09-13T00:09:57.492082082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:09:57.492343 containerd[1474]: time="2025-09-13T00:09:57.492142819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:09:57.492343 containerd[1474]: time="2025-09-13T00:09:57.492161075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:57.492343 containerd[1474]: time="2025-09-13T00:09:57.492251970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:57.496638 systemd-networkd[1398]: calidbaf68abd87: Link UP Sep 13 00:09:57.500338 systemd-networkd[1398]: calidbaf68abd87: Gained carrier Sep 13 00:09:57.524705 containerd[1474]: 2025-09-13 00:09:57.176 [INFO][4236] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Sep 13 00:09:57.524705 containerd[1474]: 2025-09-13 00:09:57.183 [INFO][4236] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" iface="eth0" netns="/var/run/netns/cni-731ab2bd-2984-a7b0-6946-4b2fc9607a1e" Sep 13 00:09:57.524705 containerd[1474]: 2025-09-13 00:09:57.190 [INFO][4236] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" iface="eth0" netns="/var/run/netns/cni-731ab2bd-2984-a7b0-6946-4b2fc9607a1e" Sep 13 00:09:57.524705 containerd[1474]: 2025-09-13 00:09:57.192 [INFO][4236] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" iface="eth0" netns="/var/run/netns/cni-731ab2bd-2984-a7b0-6946-4b2fc9607a1e" Sep 13 00:09:57.524705 containerd[1474]: 2025-09-13 00:09:57.192 [INFO][4236] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Sep 13 00:09:57.524705 containerd[1474]: 2025-09-13 00:09:57.192 [INFO][4236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Sep 13 00:09:57.524705 containerd[1474]: 2025-09-13 00:09:57.296 [INFO][4326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" HandleID="k8s-pod-network.5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Workload="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" Sep 13 00:09:57.524705 containerd[1474]: 2025-09-13 00:09:57.296 [INFO][4326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:09:57.524705 containerd[1474]: 2025-09-13 00:09:57.474 [INFO][4326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:09:57.524705 containerd[1474]: 2025-09-13 00:09:57.504 [WARNING][4326] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" HandleID="k8s-pod-network.5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Workload="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" Sep 13 00:09:57.524705 containerd[1474]: 2025-09-13 00:09:57.504 [INFO][4326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" HandleID="k8s-pod-network.5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Workload="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" Sep 13 00:09:57.524705 containerd[1474]: 2025-09-13 00:09:57.507 [INFO][4326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:09:57.524705 containerd[1474]: 2025-09-13 00:09:57.514 [INFO][4236] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Sep 13 00:09:57.525490 containerd[1474]: time="2025-09-13T00:09:57.525456931Z" level=info msg="TearDown network for sandbox \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\" successfully" Sep 13 00:09:57.525585 containerd[1474]: time="2025-09-13T00:09:57.525565771Z" level=info msg="StopPodSandbox for \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\" returns successfully" Sep 13 00:09:57.526968 containerd[1474]: time="2025-09-13T00:09:57.526941526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7854f6d79d-nbc6q,Uid:bda40bd6-a908-46e8-9d69-02dbc2713a08,Namespace:calico-system,Attempt:1,}" Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.015 [INFO][4172] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.042 [INFO][4172] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--gg29f-eth0 goldmane-7988f88666- calico-system b17de5d2-115f-45db-a278-f3d4d17bee79 975 0 2025-09-13 00:09:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-gg29f eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidbaf68abd87 [] [] }} ContainerID="4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" Namespace="calico-system" Pod="goldmane-7988f88666-gg29f" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gg29f-" Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.042 [INFO][4172] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" Namespace="calico-system" Pod="goldmane-7988f88666-gg29f" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gg29f-eth0" Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.263 [INFO][4257] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" HandleID="k8s-pod-network.4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" Workload="localhost-k8s-goldmane--7988f88666--gg29f-eth0" Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.265 [INFO][4257] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" HandleID="k8s-pod-network.4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" Workload="localhost-k8s-goldmane--7988f88666--gg29f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000590e60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-gg29f", "timestamp":"2025-09-13 00:09:57.263362951 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.266 [INFO][4257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
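
The coredns endpoint dump above prints its WorkloadEndpointPort values in hex; decoded, they are the expected CoreDNS ports:

```go
package main

import "fmt"

func main() {
	// Ports from the v3.WorkloadEndpointPort dump above, decoded:
	fmt.Println(0x35)   // 53   -> "dns" (UDP) and "dns-tcp" (TCP)
	fmt.Println(0x23c1) // 9153 -> "metrics" (TCP), the CoreDNS metrics port
}
```
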
Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.399 [INFO][4257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.400 [INFO][4257] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.413 [INFO][4257] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" host="localhost" Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.423 [INFO][4257] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.430 [INFO][4257] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.437 [INFO][4257] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.447 [INFO][4257] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.447 [INFO][4257] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" host="localhost" Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.450 [INFO][4257] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0 Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.460 [INFO][4257] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" host="localhost" Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.474 [INFO][4257] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" host="localhost" Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.474 [INFO][4257] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" host="localhost" Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.474 [INFO][4257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
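
Each ADD in this stretch starts with "File /var/lib/calico/mtu does not exist": the plugin looks for an MTU that calico/node may have detected and written, and falls back to a default when the file is absent. A sketch of that check; the 1500 fallback is an assumption, the log does not state the default:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readMTU models the check behind "File /var/lib/calico/mtu does not
// exist": use the detected MTU if the file is present and parseable,
// otherwise fall back to a default.
func readMTU(path string, fallback int) int {
	b, err := os.ReadFile(path)
	if err != nil {
		return fallback // file missing: use the default
	}
	if n, err := strconv.Atoi(strings.TrimSpace(string(b))); err == nil {
		return n
	}
	return fallback
}

func main() {
	fmt.Println(readMTU("/var/lib/calico/mtu", 1500))
}
```
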
Sep 13 00:09:57.527525 containerd[1474]: 2025-09-13 00:09:57.474 [INFO][4257] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" HandleID="k8s-pod-network.4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" Workload="localhost-k8s-goldmane--7988f88666--gg29f-eth0" Sep 13 00:09:57.528340 containerd[1474]: 2025-09-13 00:09:57.478 [INFO][4172] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" Namespace="calico-system" Pod="goldmane-7988f88666-gg29f" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gg29f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--gg29f-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"b17de5d2-115f-45db-a278-f3d4d17bee79", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-gg29f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidbaf68abd87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:09:57.528340 containerd[1474]: 2025-09-13 00:09:57.479 [INFO][4172] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" Namespace="calico-system" Pod="goldmane-7988f88666-gg29f" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gg29f-eth0" Sep 13 00:09:57.528340 containerd[1474]: 2025-09-13 00:09:57.479 [INFO][4172] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidbaf68abd87 ContainerID="4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" Namespace="calico-system" Pod="goldmane-7988f88666-gg29f" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gg29f-eth0" Sep 13 00:09:57.528340 containerd[1474]: 2025-09-13 00:09:57.499 [INFO][4172] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" Namespace="calico-system" Pod="goldmane-7988f88666-gg29f" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gg29f-eth0" Sep 13 00:09:57.528340 containerd[1474]: 2025-09-13 00:09:57.499 [INFO][4172] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" Namespace="calico-system" Pod="goldmane-7988f88666-gg29f" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gg29f-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--gg29f-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"b17de5d2-115f-45db-a278-f3d4d17bee79", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0", Pod:"goldmane-7988f88666-gg29f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidbaf68abd87", MAC:"92:e9:a1:86:82:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:09:57.528340 containerd[1474]: 2025-09-13 00:09:57.523 [INFO][4172] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0" Namespace="calico-system" Pod="goldmane-7988f88666-gg29f" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gg29f-eth0" Sep 13 00:09:57.531350 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:09:57.538534 systemd[1]: Started cri-containerd-d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f.scope - libcontainer container d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f. Sep 13 00:09:57.553140 containerd[1474]: 2025-09-13 00:09:57.189 [INFO][4237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Sep 13 00:09:57.553140 containerd[1474]: 2025-09-13 00:09:57.189 [INFO][4237] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" iface="eth0" netns="/var/run/netns/cni-ec3e1c8f-15ed-5057-67c8-3aa2939170d7" Sep 13 00:09:57.553140 containerd[1474]: 2025-09-13 00:09:57.190 [INFO][4237] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" iface="eth0" netns="/var/run/netns/cni-ec3e1c8f-15ed-5057-67c8-3aa2939170d7" Sep 13 00:09:57.553140 containerd[1474]: 2025-09-13 00:09:57.190 [INFO][4237] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" iface="eth0" netns="/var/run/netns/cni-ec3e1c8f-15ed-5057-67c8-3aa2939170d7" Sep 13 00:09:57.553140 containerd[1474]: 2025-09-13 00:09:57.190 [INFO][4237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Sep 13 00:09:57.553140 containerd[1474]: 2025-09-13 00:09:57.190 [INFO][4237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Sep 13 00:09:57.553140 containerd[1474]: 2025-09-13 00:09:57.300 [INFO][4323] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" HandleID="k8s-pod-network.da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Workload="localhost-k8s-csi--node--driver--t8tr4-eth0" Sep 13 00:09:57.553140 containerd[1474]: 2025-09-13 00:09:57.300 [INFO][4323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:09:57.553140 containerd[1474]: 2025-09-13 00:09:57.507 [INFO][4323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:09:57.553140 containerd[1474]: 2025-09-13 00:09:57.527 [WARNING][4323] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" HandleID="k8s-pod-network.da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Workload="localhost-k8s-csi--node--driver--t8tr4-eth0" Sep 13 00:09:57.553140 containerd[1474]: 2025-09-13 00:09:57.528 [INFO][4323] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" HandleID="k8s-pod-network.da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Workload="localhost-k8s-csi--node--driver--t8tr4-eth0" Sep 13 00:09:57.553140 containerd[1474]: 2025-09-13 00:09:57.533 [INFO][4323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:09:57.553140 containerd[1474]: 2025-09-13 00:09:57.543 [INFO][4237] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Sep 13 00:09:57.553596 containerd[1474]: time="2025-09-13T00:09:57.553334340Z" level=info msg="TearDown network for sandbox \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\" successfully" Sep 13 00:09:57.553596 containerd[1474]: time="2025-09-13T00:09:57.553366802Z" level=info msg="StopPodSandbox for \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\" returns successfully" Sep 13 00:09:57.554302 containerd[1474]: time="2025-09-13T00:09:57.554265223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8tr4,Uid:2983a866-de88-4651-bc70-4e6c5a764426,Namespace:calico-system,Attempt:1,}" Sep 13 00:09:57.564422 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:09:57.592598 containerd[1474]: time="2025-09-13T00:09:57.592538381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f56b585c-gwf2q,Uid:a73c3ca8-469a-40ee-8971-a5fa4ecc6460,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba\"" Sep 13 00:09:57.603319 containerd[1474]: time="2025-09-13T00:09:57.603121621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:09:57.603579 containerd[1474]: time="2025-09-13T00:09:57.603236894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:09:57.603579 containerd[1474]: time="2025-09-13T00:09:57.603382706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:57.604659 containerd[1474]: time="2025-09-13T00:09:57.603930698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:57.616707 containerd[1474]: time="2025-09-13T00:09:57.616633123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q44l7,Uid:a4e7348d-2cc7-4002-b829-c570ed8d1df0,Namespace:kube-system,Attempt:1,} returns sandbox id \"d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f\"" Sep 13 00:09:57.619271 kubelet[2553]: E0913 00:09:57.619219 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:57.622715 containerd[1474]: time="2025-09-13T00:09:57.622653405Z" level=info msg="CreateContainer within sandbox \"d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:09:57.633111 systemd-networkd[1398]: cali35c291d975a: Link UP Sep 13 00:09:57.639936 systemd-networkd[1398]: cali35c291d975a: Gained carrier Sep 13 00:09:57.653283 systemd[1]: Started cri-containerd-4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0.scope - libcontainer container 4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0. 
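
The two teardowns above each end with a WARNING that the address "doesn't exist. Ignoring": CNI DEL is required to be idempotent, so releasing an IP that was never assigned (or was already released) is a no-op rather than an error, and "Teardown processing complete." still follows. A sketch of that behavior:

```go
package main

import "fmt"

// release models the WARNING above: a DEL for an address that is not
// currently assigned is logged and ignored, never treated as a failure.
func release(used map[string]bool, addr string) {
	if !used[addr] {
		fmt.Printf("Asked to release address %s but it doesn't exist. Ignoring\n", addr)
		return
	}
	delete(used, addr)
	fmt.Println("released", addr)
}

func main() {
	used := map[string]bool{"192.168.88.130": true}
	release(used, "192.168.88.130") // released
	release(used, "192.168.88.130") // ignored on the second DEL
}
```
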
Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.194 [INFO][4298] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.271 [INFO][4298] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6b6bbf4cfb--7ddpj-eth0 whisker-6b6bbf4cfb- calico-system e31d705d-f18a-4056-9103-cd026858c652 996 0 2025-09-13 00:09:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b6bbf4cfb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6b6bbf4cfb-7ddpj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali35c291d975a [] [] }} ContainerID="f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" Namespace="calico-system" Pod="whisker-6b6bbf4cfb-7ddpj" WorkloadEndpoint="localhost-k8s-whisker--6b6bbf4cfb--7ddpj-" Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.271 [INFO][4298] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" Namespace="calico-system" Pod="whisker-6b6bbf4cfb-7ddpj" WorkloadEndpoint="localhost-k8s-whisker--6b6bbf4cfb--7ddpj-eth0" Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.433 [INFO][4370] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" HandleID="k8s-pod-network.f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" Workload="localhost-k8s-whisker--6b6bbf4cfb--7ddpj-eth0" Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.434 [INFO][4370] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" HandleID="k8s-pod-network.f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" Workload="localhost-k8s-whisker--6b6bbf4cfb--7ddpj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002add00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6b6bbf4cfb-7ddpj", "timestamp":"2025-09-13 00:09:57.433799395 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.434 [INFO][4370] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.533 [INFO][4370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
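
The kubelet "Nameserver limits exceeded" messages just above reflect the traditional three-nameserver limit of resolv.conf: more servers than that are configured, so kubelet applies only 1.1.1.1, 1.0.0.1 and 8.8.8.8 and warns that the rest were omitted. A sketch of that truncation; the fourth server below is illustrative, since the log does not name the omitted ones:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// capNameservers models the kubelet warning above: collect nameserver
// lines from a resolv.conf and keep at most the first three.
const maxNameservers = 3

func capNameservers(resolvConf string) []string {
	var ns []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > maxNameservers {
		fmt.Printf("omitting %d nameserver(s)\n", len(ns)-maxNameservers)
		ns = ns[:maxNameservers]
	}
	return ns
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(capNameservers(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```
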
Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.534 [INFO][4370] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.552 [INFO][4370] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" host="localhost" Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.561 [INFO][4370] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.572 [INFO][4370] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.575 [INFO][4370] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.579 [INFO][4370] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.579 [INFO][4370] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" host="localhost" Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.584 [INFO][4370] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.593 [INFO][4370] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" host="localhost" Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.612 [INFO][4370] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" host="localhost" Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.613 [INFO][4370] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" host="localhost" Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.613 [INFO][4370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
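
The host-side interface names handed out in this stretch (cali92b8ab1575d, calid054275f013, calidbaf68abd87, cali35c291d975a) are all "cali" plus 11 hex characters, i.e. a short, stable hash of the workload identity. The exact hash input below is an assumption; the log only shows the resulting names:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName shows how a stable host interface name of the form seen above
// ("cali" + 11 hex chars) can be derived. Hashing "namespace.pod" is an
// assumption about the input, not something the log confirms.
func vethName(namespace, pod string) string {
	sum := sha1.Sum([]byte(namespace + "." + pod))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("calico-system", "whisker-6b6bbf4cfb-7ddpj"))
}
```
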
Sep 13 00:09:57.678025 containerd[1474]: 2025-09-13 00:09:57.613 [INFO][4370] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" HandleID="k8s-pod-network.f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" Workload="localhost-k8s-whisker--6b6bbf4cfb--7ddpj-eth0" Sep 13 00:09:57.678848 containerd[1474]: 2025-09-13 00:09:57.621 [INFO][4298] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" Namespace="calico-system" Pod="whisker-6b6bbf4cfb-7ddpj" WorkloadEndpoint="localhost-k8s-whisker--6b6bbf4cfb--7ddpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6b6bbf4cfb--7ddpj-eth0", GenerateName:"whisker-6b6bbf4cfb-", Namespace:"calico-system", SelfLink:"", UID:"e31d705d-f18a-4056-9103-cd026858c652", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b6bbf4cfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6b6bbf4cfb-7ddpj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali35c291d975a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:09:57.678848 containerd[1474]: 2025-09-13 00:09:57.622 [INFO][4298] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" Namespace="calico-system" Pod="whisker-6b6bbf4cfb-7ddpj" WorkloadEndpoint="localhost-k8s-whisker--6b6bbf4cfb--7ddpj-eth0" Sep 13 00:09:57.678848 containerd[1474]: 2025-09-13 00:09:57.622 [INFO][4298] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35c291d975a ContainerID="f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" Namespace="calico-system" Pod="whisker-6b6bbf4cfb-7ddpj" WorkloadEndpoint="localhost-k8s-whisker--6b6bbf4cfb--7ddpj-eth0" Sep 13 00:09:57.678848 containerd[1474]: 2025-09-13 00:09:57.641 [INFO][4298] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" Namespace="calico-system" Pod="whisker-6b6bbf4cfb-7ddpj" WorkloadEndpoint="localhost-k8s-whisker--6b6bbf4cfb--7ddpj-eth0" Sep 13 00:09:57.678848 containerd[1474]: 2025-09-13 00:09:57.642 [INFO][4298] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" Namespace="calico-system" Pod="whisker-6b6bbf4cfb-7ddpj" WorkloadEndpoint="localhost-k8s-whisker--6b6bbf4cfb--7ddpj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6b6bbf4cfb--7ddpj-eth0", GenerateName:"whisker-6b6bbf4cfb-", Namespace:"calico-system", SelfLink:"", UID:"e31d705d-f18a-4056-9103-cd026858c652", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b6bbf4cfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef", Pod:"whisker-6b6bbf4cfb-7ddpj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali35c291d975a", MAC:"7a:43:1e:40:52:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:09:57.678848 containerd[1474]: 2025-09-13 00:09:57.664 [INFO][4298] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef" Namespace="calico-system" Pod="whisker-6b6bbf4cfb-7ddpj" WorkloadEndpoint="localhost-k8s-whisker--6b6bbf4cfb--7ddpj-eth0" Sep 13 00:09:57.685959 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:09:57.695872 containerd[1474]: time="2025-09-13T00:09:57.695788981Z" level=info msg="CreateContainer within sandbox \"d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b0256ec85114241bb3e1355d5bcfeaf8d922572c7806e294a761a43193111d45\"" Sep 13 00:09:57.699906 containerd[1474]: time="2025-09-13T00:09:57.698582012Z" level=info msg="StartContainer for \"b0256ec85114241bb3e1355d5bcfeaf8d922572c7806e294a761a43193111d45\"" Sep 13 00:09:57.709625 kubelet[2553]: I0913 00:09:57.709565 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:09:57.710123 kubelet[2553]: E0913 00:09:57.710094 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:57.770218 containerd[1474]: time="2025-09-13T00:09:57.767218424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:09:57.770218 containerd[1474]: time="2025-09-13T00:09:57.767344458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:09:57.770218 containerd[1474]: time="2025-09-13T00:09:57.767364567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:57.770218 containerd[1474]: time="2025-09-13T00:09:57.767463939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:57.779067 systemd[1]: Started cri-containerd-b0256ec85114241bb3e1355d5bcfeaf8d922572c7806e294a761a43193111d45.scope - libcontainer container b0256ec85114241bb3e1355d5bcfeaf8d922572c7806e294a761a43193111d45. Sep 13 00:09:57.788636 containerd[1474]: time="2025-09-13T00:09:57.788549303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gg29f,Uid:b17de5d2-115f-45db-a278-f3d4d17bee79,Namespace:calico-system,Attempt:1,} returns sandbox id \"4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0\"" Sep 13 00:09:57.823268 systemd[1]: Started cri-containerd-f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef.scope - libcontainer container f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef. Sep 13 00:09:57.834485 containerd[1474]: time="2025-09-13T00:09:57.834430497Z" level=info msg="StartContainer for \"b0256ec85114241bb3e1355d5bcfeaf8d922572c7806e294a761a43193111d45\" returns successfully" Sep 13 00:09:57.847686 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:09:57.865359 systemd-networkd[1398]: cali9a21bafa11a: Link UP Sep 13 00:09:57.867213 systemd-networkd[1398]: cali9a21bafa11a: Gained carrier Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.666 [INFO][4529] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.694 [INFO][4529] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0 calico-kube-controllers-7854f6d79d- calico-system bda40bd6-a908-46e8-9d69-02dbc2713a08 1007 0 2025-09-13 00:09:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7854f6d79d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7854f6d79d-nbc6q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9a21bafa11a [] [] }} ContainerID="77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" Namespace="calico-system" Pod="calico-kube-controllers-7854f6d79d-nbc6q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-" Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.694 [INFO][4529] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" Namespace="calico-system" Pod="calico-kube-controllers-7854f6d79d-nbc6q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.775 [INFO][4585] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" HandleID="k8s-pod-network.77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" Workload="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.775 
[INFO][4585] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" HandleID="k8s-pod-network.77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" Workload="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139800), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7854f6d79d-nbc6q", "timestamp":"2025-09-13 00:09:57.775206537 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.775 [INFO][4585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.776 [INFO][4585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.776 [INFO][4585] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.785 [INFO][4585] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" host="localhost" Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.822 [INFO][4585] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.829 [INFO][4585] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.832 [INFO][4585] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.836 [INFO][4585] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.836 [INFO][4585] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" host="localhost" Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.839 [INFO][4585] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7 Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.847 [INFO][4585] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" host="localhost" Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.854 [INFO][4585] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" host="localhost" Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.855 [INFO][4585] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" host="localhost" Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.855 [INFO][4585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
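The block above is Calico's per-host IPAM allocator at work: acquire the host-wide IPAM lock, confirm this node's affinity for block 192.168.88.128/26, write the block back to claim 192.168.88.134, then release the lock. Below is a minimal self-contained Go sketch of that allocation pattern, with a process-wide mutex standing in for the datastore lock; the names are illustrative, not Calico's libcalico-go API.

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// block models one Calico IPAM affinity block (e.g. 192.168.88.128/26):
// a contiguous CIDR owned by one host, with a per-address claim table.
type block struct {
	cidr *net.IPNet
	used map[string]string // IP -> handle, mirrors "Writing block in order to claim IPs"
}

var hostIPAMLock sync.Mutex // stands in for the host-wide IPAM lock in the log

// autoAssign claims the next free address in the block for the given handle,
// following the logged order: lock, load block, scan, write claim, unlock.
func autoAssign(b *block, handle string) (net.IP, error) {
	hostIPAMLock.Lock()         // "About to acquire host-wide IPAM lock."
	defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."

	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
		if _, taken := b.used[ip.String()]; !taken {
			b.used[ip.String()] = handle // claim is persisted before the lock drops
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

// next returns the numerically following IP address.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &block{cidr: cidr, used: map[string]string{}}
	// Pre-claim .128 through .133 as in the log; the next assignment yields .134.
	for i := 0; i < 6; i++ {
		autoAssign(b, fmt.Sprintf("existing-%d", i))
	}
	ip, _ := autoAssign(b, "k8s-pod-network.calico-kube-controllers") // illustrative handle
	fmt.Println(ip)                                                   // 192.168.88.134
}
```

Persisting the claim before dropping the lock is what makes the later "Successfully claimed IPs" line safe against a concurrent ADD on the same host.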
Sep 13 00:09:57.892074 containerd[1474]: 2025-09-13 00:09:57.855 [INFO][4585] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" HandleID="k8s-pod-network.77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" Workload="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" Sep 13 00:09:57.893320 containerd[1474]: 2025-09-13 00:09:57.860 [INFO][4529] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" Namespace="calico-system" Pod="calico-kube-controllers-7854f6d79d-nbc6q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0", GenerateName:"calico-kube-controllers-7854f6d79d-", Namespace:"calico-system", SelfLink:"", UID:"bda40bd6-a908-46e8-9d69-02dbc2713a08", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7854f6d79d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7854f6d79d-nbc6q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9a21bafa11a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:09:57.893320 containerd[1474]: 2025-09-13 00:09:57.860 [INFO][4529] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" Namespace="calico-system" Pod="calico-kube-controllers-7854f6d79d-nbc6q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" Sep 13 00:09:57.893320 containerd[1474]: 2025-09-13 00:09:57.860 [INFO][4529] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a21bafa11a ContainerID="77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" Namespace="calico-system" Pod="calico-kube-controllers-7854f6d79d-nbc6q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" Sep 13 00:09:57.893320 containerd[1474]: 2025-09-13 00:09:57.868 [INFO][4529] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" Namespace="calico-system" Pod="calico-kube-controllers-7854f6d79d-nbc6q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" Sep 13 00:09:57.893320 containerd[1474]: 2025-09-13 00:09:57.869 [INFO][4529] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" Namespace="calico-system" Pod="calico-kube-controllers-7854f6d79d-nbc6q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0", GenerateName:"calico-kube-controllers-7854f6d79d-", Namespace:"calico-system", SelfLink:"", UID:"bda40bd6-a908-46e8-9d69-02dbc2713a08", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7854f6d79d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7", Pod:"calico-kube-controllers-7854f6d79d-nbc6q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9a21bafa11a", MAC:"b6:19:57:37:74:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:09:57.893320 containerd[1474]: 2025-09-13 00:09:57.887 [INFO][4529] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7" Namespace="calico-system" Pod="calico-kube-controllers-7854f6d79d-nbc6q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" Sep 13 00:09:57.902972 containerd[1474]: time="2025-09-13T00:09:57.902809992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b6bbf4cfb-7ddpj,Uid:e31d705d-f18a-4056-9103-cd026858c652,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef\"" Sep 13 00:09:57.920862 containerd[1474]: time="2025-09-13T00:09:57.920313427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:09:57.920862 containerd[1474]: time="2025-09-13T00:09:57.920464319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:09:57.920862 containerd[1474]: time="2025-09-13T00:09:57.920483476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:57.920862 containerd[1474]: time="2025-09-13T00:09:57.920720345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:57.949213 systemd[1]: Started cri-containerd-77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7.scope - libcontainer container 77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7. Sep 13 00:09:57.961814 systemd-networkd[1398]: cali8d17b75e1c9: Link UP Sep 13 00:09:57.962961 systemd-networkd[1398]: cali8d17b75e1c9: Gained carrier Sep 13 00:09:57.981341 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.687 [INFO][4548] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.704 [INFO][4548] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--t8tr4-eth0 csi-node-driver- calico-system 2983a866-de88-4651-bc70-4e6c5a764426 1008 0 2025-09-13 00:09:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-t8tr4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8d17b75e1c9 [] [] }} ContainerID="27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" Namespace="calico-system" Pod="csi-node-driver-t8tr4" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8tr4-" Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.704 [INFO][4548] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" Namespace="calico-system" Pod="csi-node-driver-t8tr4" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8tr4-eth0" Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.801 [INFO][4598] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" HandleID="k8s-pod-network.27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" Workload="localhost-k8s-csi--node--driver--t8tr4-eth0" Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.802 [INFO][4598] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" HandleID="k8s-pod-network.27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" Workload="localhost-k8s-csi--node--driver--t8tr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6750), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-t8tr4", "timestamp":"2025-09-13 00:09:57.801239234 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.802 [INFO][4598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.855 [INFO][4598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.855 [INFO][4598] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.887 [INFO][4598] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" host="localhost" Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.922 [INFO][4598] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.928 [INFO][4598] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.931 [INFO][4598] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.934 [INFO][4598] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.934 [INFO][4598] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" host="localhost" Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.936 [INFO][4598] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.944 [INFO][4598] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" host="localhost" Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.952 [INFO][4598] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" host="localhost" Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.952 [INFO][4598] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" host="localhost" Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.952 [INFO][4598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
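Note the lock handoff visible in the timestamps: request [4598] (csi-node-driver-t8tr4) asked for the host-wide lock at 57.802 but acquired it only at 57.855, the instant [4585] released it, so concurrent CNI ADDs on one node serialize on IPAM. A toy reproduction of that serialization, with goroutines standing in for the two plugin invocations; all names and durations are illustrative.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

var ipamLock sync.Mutex

// cniAdd mimics one CNI ADD: wait for the host-wide IPAM lock, hold it
// for the duration of the block load and claim writes, then release.
func cniAdd(req string, hold time.Duration, wg *sync.WaitGroup) {
	defer wg.Done()
	asked := time.Now()
	ipamLock.Lock() // "About to acquire host-wide IPAM lock."
	waited := time.Since(asked)
	fmt.Printf("[%s] acquired lock after %v\n", req, waited)
	time.Sleep(hold) // affinity lookup + claim, as in the log
	ipamLock.Unlock()
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	// In the log, [4585] held the lock from 57.776 to 57.855; [4598]
	// asked at 57.802 and waited roughly 53ms for the handoff.
	go cniAdd("4585", 79*time.Millisecond, &wg)
	go cniAdd("4598", 79*time.Millisecond, &wg)
	wg.Wait()
}
```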
Sep 13 00:09:57.981552 containerd[1474]: 2025-09-13 00:09:57.952 [INFO][4598] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" HandleID="k8s-pod-network.27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" Workload="localhost-k8s-csi--node--driver--t8tr4-eth0" Sep 13 00:09:57.982140 containerd[1474]: 2025-09-13 00:09:57.958 [INFO][4548] cni-plugin/k8s.go 418: Populated endpoint ContainerID="27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" Namespace="calico-system" Pod="csi-node-driver-t8tr4" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8tr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t8tr4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2983a866-de88-4651-bc70-4e6c5a764426", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-t8tr4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8d17b75e1c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:09:57.982140 containerd[1474]: 2025-09-13 00:09:57.958 [INFO][4548] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" Namespace="calico-system" Pod="csi-node-driver-t8tr4" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8tr4-eth0" Sep 13 00:09:57.982140 containerd[1474]: 2025-09-13 00:09:57.959 [INFO][4548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d17b75e1c9 ContainerID="27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" Namespace="calico-system" Pod="csi-node-driver-t8tr4" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8tr4-eth0" Sep 13 00:09:57.982140 containerd[1474]: 2025-09-13 00:09:57.964 [INFO][4548] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" Namespace="calico-system" Pod="csi-node-driver-t8tr4" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8tr4-eth0" Sep 13 00:09:57.982140 containerd[1474]: 2025-09-13 00:09:57.964 [INFO][4548] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" Namespace="calico-system" Pod="csi-node-driver-t8tr4" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--t8tr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t8tr4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2983a866-de88-4651-bc70-4e6c5a764426", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f", Pod:"csi-node-driver-t8tr4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8d17b75e1c9", MAC:"0e:e3:de:9e:68:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:09:57.982140 containerd[1474]: 2025-09-13 00:09:57.977 [INFO][4548] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f" Namespace="calico-system" Pod="csi-node-driver-t8tr4" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8tr4-eth0" Sep 13 00:09:58.008650 containerd[1474]: time="2025-09-13T00:09:58.008510997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:09:58.008800 containerd[1474]: time="2025-09-13T00:09:58.008689823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:09:58.008800 containerd[1474]: time="2025-09-13T00:09:58.008764787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:58.009007 containerd[1474]: time="2025-09-13T00:09:58.008945026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:09:58.029094 containerd[1474]: time="2025-09-13T00:09:58.028919354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7854f6d79d-nbc6q,Uid:bda40bd6-a908-46e8-9d69-02dbc2713a08,Namespace:calico-system,Attempt:1,} returns sandbox id \"77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7\"" Sep 13 00:09:58.035205 systemd[1]: Started cri-containerd-27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f.scope - libcontainer container 27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f. 
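Each "Started cri-containerd-<id>.scope" line is systemd creating a transient scope unit for the container's cgroup on containerd's behalf; the unit name is just the container ID with a fixed prefix and suffix. A hedged sketch for inspecting one of these scopes from Go by shelling out to systemctl (assumes a systemd host with a live container; systemctl show --value has existed since systemd 230):

```go
package main

import (
	"fmt"
	"os/exec"
)

// scopeUnit builds the transient unit name containerd uses for a container,
// as seen in the log: "cri-containerd-<id>.scope".
func scopeUnit(containerID string) string {
	return "cri-containerd-" + containerID + ".scope"
}

func main() {
	id := "27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f"
	unit := scopeUnit(id)
	// Query the scope's state; on the node this prints e.g. "active".
	out, err := exec.Command("systemctl", "show", "-p", "ActiveState", "--value", unit).Output()
	if err != nil {
		fmt.Println("systemctl failed:", err)
		return
	}
	fmt.Printf("%s: %s", unit, out)
}
```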
Sep 13 00:09:58.050891 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:09:58.065162 containerd[1474]: time="2025-09-13T00:09:58.065103563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8tr4,Uid:2983a866-de88-4651-bc70-4e6c5a764426,Namespace:calico-system,Attempt:1,} returns sandbox id \"27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f\"" Sep 13 00:09:58.235163 systemd[1]: run-netns-cni\x2dec3e1c8f\x2d15ed\x2d5057\x2d67c8\x2d3aa2939170d7.mount: Deactivated successfully. Sep 13 00:09:58.235327 systemd[1]: run-netns-cni\x2d731ab2bd\x2d2984\x2da7b0\x2d6946\x2d4b2fc9607a1e.mount: Deactivated successfully. Sep 13 00:09:58.665212 systemd-networkd[1398]: cali35c291d975a: Gained IPv6LL Sep 13 00:09:58.666118 systemd-networkd[1398]: calid054275f013: Gained IPv6LL Sep 13 00:09:58.702219 kubelet[2553]: E0913 00:09:58.701383 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:58.710928 kubelet[2553]: E0913 00:09:58.710892 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:09:58.719686 kubelet[2553]: I0913 00:09:58.719582 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-q44l7" podStartSLOduration=44.719557508 podStartE2EDuration="44.719557508s" podCreationTimestamp="2025-09-13 00:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:09:58.7191502 +0000 UTC m=+49.731311621" watchObservedRunningTime="2025-09-13 00:09:58.719557508 +0000 UTC m=+49.731718919" Sep 13 00:09:58.921254 systemd-networkd[1398]: cali92b8ab1575d: Gained IPv6LL Sep 13 00:09:59.049204 systemd-networkd[1398]: cali16f1f4fff48: Gained IPv6LL Sep 13 00:09:59.222065 kernel: bpftool[4859]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:09:59.242149 systemd-networkd[1398]: calidbaf68abd87: Gained IPv6LL Sep 13 00:09:59.562143 systemd-networkd[1398]: cali8d17b75e1c9: Gained IPv6LL Sep 13 00:09:59.562592 systemd-networkd[1398]: cali9a21bafa11a: Gained IPv6LL Sep 13 00:09:59.623313 systemd-networkd[1398]: vxlan.calico: Link UP Sep 13 00:09:59.623325 systemd-networkd[1398]: vxlan.calico: Gained carrier Sep 13 00:09:59.791440 kubelet[2553]: E0913 00:09:59.791372 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:00.137165 containerd[1474]: time="2025-09-13T00:10:00.137069277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:00.138435 containerd[1474]: time="2025-09-13T00:10:00.138324773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:10:00.140396 containerd[1474]: time="2025-09-13T00:10:00.140238260Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:00.179846 containerd[1474]: 
time="2025-09-13T00:10:00.179737286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:00.180794 containerd[1474]: time="2025-09-13T00:10:00.180746977Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.747021616s" Sep 13 00:10:00.180885 containerd[1474]: time="2025-09-13T00:10:00.180791022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:10:00.182584 containerd[1474]: time="2025-09-13T00:10:00.182467681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:10:00.183741 containerd[1474]: time="2025-09-13T00:10:00.183671939Z" level=info msg="CreateContainer within sandbox \"f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:10:00.376728 containerd[1474]: time="2025-09-13T00:10:00.376632127Z" level=info msg="CreateContainer within sandbox \"f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ec269329d114e0189c6f467c5717807dd20634935438b143fd089f004a9fb7d4\"" Sep 13 00:10:00.377850 containerd[1474]: time="2025-09-13T00:10:00.377795435Z" level=info msg="StartContainer for \"ec269329d114e0189c6f467c5717807dd20634935438b143fd089f004a9fb7d4\"" Sep 13 00:10:00.513363 systemd[1]: Started cri-containerd-ec269329d114e0189c6f467c5717807dd20634935438b143fd089f004a9fb7d4.scope - libcontainer container ec269329d114e0189c6f467c5717807dd20634935438b143fd089f004a9fb7d4. 
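The CreateContainer-within-sandbox / StartContainer pair logged above is the kubelet driving containerd over the CRI gRPC API. Here is a sketch of the same two calls using k8s.io/cri-api against the containerd socket; the socket path and image tag are assumptions, the SandboxConfig that kubelet also sends is omitted, and real ContainerConfigs are far richer than this.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI socket path; adjust for the node in question.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx := context.Background()
	sandboxID := "d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f"

	// "CreateContainer within sandbox ... returns container id ..."
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "coredns", Attempt: 0},
			// Illustrative image reference, not taken from the log.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/coredns/coredns:v1.11.1"},
		},
		// Kubelet also passes the pod's SandboxConfig here; elided for brevity.
	})
	if err != nil {
		log.Fatal(err)
	}

	// "StartContainer for ... returns successfully"
	if _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started", created.ContainerId)
}
```

containerd answers the first call with the container ID that then appears in the "StartContainer for ... returns successfully" line.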
Sep 13 00:10:00.583925 containerd[1474]: time="2025-09-13T00:10:00.583857378Z" level=info msg="StartContainer for \"ec269329d114e0189c6f467c5717807dd20634935438b143fd089f004a9fb7d4\" returns successfully" Sep 13 00:10:00.795687 kubelet[2553]: E0913 00:10:00.795294 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:00.816642 kubelet[2553]: I0913 00:10:00.816552 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76f56b585c-zr49b" podStartSLOduration=30.067445961 podStartE2EDuration="32.81652739s" podCreationTimestamp="2025-09-13 00:09:28 +0000 UTC" firstStartedPulling="2025-09-13 00:09:57.433071685 +0000 UTC m=+48.445233086" lastFinishedPulling="2025-09-13 00:10:00.182153094 +0000 UTC m=+51.194314515" observedRunningTime="2025-09-13 00:10:00.816191752 +0000 UTC m=+51.828353183" watchObservedRunningTime="2025-09-13 00:10:00.81652739 +0000 UTC m=+51.828688801" Sep 13 00:10:00.905891 systemd-networkd[1398]: vxlan.calico: Gained IPv6LL Sep 13 00:10:00.997513 containerd[1474]: time="2025-09-13T00:10:00.997399944Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:01.002732 containerd[1474]: time="2025-09-13T00:10:01.000817107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:10:01.003869 containerd[1474]: time="2025-09-13T00:10:01.003818211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 821.302918ms" Sep 13 00:10:01.003962 containerd[1474]: time="2025-09-13T00:10:01.003870342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:10:01.006193 containerd[1474]: time="2025-09-13T00:10:01.006119113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:10:01.009962 containerd[1474]: time="2025-09-13T00:10:01.009877068Z" level=info msg="CreateContainer within sandbox \"98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:10:01.037485 containerd[1474]: time="2025-09-13T00:10:01.037418939Z" level=info msg="CreateContainer within sandbox \"98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2928349c52a1a3286599d844d10526674150d423b53f26d5c664e8dfe9a111b7\"" Sep 13 00:10:01.039742 containerd[1474]: time="2025-09-13T00:10:01.039708318Z" level=info msg="StartContainer for \"2928349c52a1a3286599d844d10526674150d423b53f26d5c664e8dfe9a111b7\"" Sep 13 00:10:01.081364 systemd[1]: Started cri-containerd-2928349c52a1a3286599d844d10526674150d423b53f26d5c664e8dfe9a111b7.scope - libcontainer container 2928349c52a1a3286599d844d10526674150d423b53f26d5c664e8dfe9a111b7. 
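The pod_startup_latency_tracker line above encodes a useful identity: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), because pull time is excluded from the startup SLO. For calico-apiserver-76f56b585c-zr49b that is 32.81652739s minus about 2.749s of pulling, which reproduces the logged 30.067445961 up to rounding of the printed durations. Checking the arithmetic in Go:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.000000000 -0700 MST"
	first, _ := time.Parse(layout, "2025-09-13 00:09:57.433071685 +0000 UTC")
	last, _ := time.Parse(layout, "2025-09-13 00:10:00.182153094 +0000 UTC")

	e2e := 32816527390 * time.Nanosecond // podStartE2EDuration="32.81652739s"
	pull := last.Sub(first)              // image-pull window from the log

	// SLO duration excludes time spent pulling images.
	fmt.Println(pull)       // 2.749081409s
	fmt.Println(e2e - pull) // ~30.067445981s vs the logged 30.067445961
}
```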
Sep 13 00:10:01.138666 containerd[1474]: time="2025-09-13T00:10:01.138601759Z" level=info msg="StartContainer for \"2928349c52a1a3286599d844d10526674150d423b53f26d5c664e8dfe9a111b7\" returns successfully" Sep 13 00:10:01.999754 kubelet[2553]: I0913 00:10:01.999422 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76f56b585c-gwf2q" podStartSLOduration=30.59600937 podStartE2EDuration="33.999395118s" podCreationTimestamp="2025-09-13 00:09:28 +0000 UTC" firstStartedPulling="2025-09-13 00:09:57.60233688 +0000 UTC m=+48.614498291" lastFinishedPulling="2025-09-13 00:10:01.005722628 +0000 UTC m=+52.017884039" observedRunningTime="2025-09-13 00:10:01.8453009 +0000 UTC m=+52.857462302" watchObservedRunningTime="2025-09-13 00:10:01.999395118 +0000 UTC m=+53.011556529" Sep 13 00:10:02.008963 systemd[1]: Started sshd@10-10.0.0.89:22-10.0.0.1:38902.service - OpenSSH per-connection server daemon (10.0.0.1:38902). Sep 13 00:10:02.084798 sshd[5029]: Accepted publickey for core from 10.0.0.1 port 38902 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:10:02.086979 sshd[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:02.092761 systemd-logind[1449]: New session 11 of user core. Sep 13 00:10:02.103044 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:10:02.282650 sshd[5029]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:02.288950 systemd[1]: sshd@10-10.0.0.89:22-10.0.0.1:38902.service: Deactivated successfully. Sep 13 00:10:02.292466 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:10:02.294023 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:10:02.295986 systemd-logind[1449]: Removed session 11. Sep 13 00:10:02.806302 kubelet[2553]: I0913 00:10:02.806250 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:10:04.808011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2428208144.mount: Deactivated successfully. 
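Unit names such as run-netns-cni\x2dec3e1c8f... and var-lib-containerd-tmpmounts-containerd\x2dmount2428208144.mount come from systemd's path escaping: leading and trailing slashes are dropped, "/" maps to "-", and every other byte outside [A-Za-z0-9:_.] (notably "-" itself) is hex-escaped as \xNN. A compact Go reimplementation that reproduces the mount unit seen above; systemd-escape -p does the same on the node, and the leading-dot escape rule is omitted here.

```go
package main

import "fmt"

// escapePath reimplements systemd's path escaping as seen in the log's
// mount and netns unit names: leading/trailing "/" dropped, "/" -> "-",
// any byte outside [A-Za-z0-9:_.] (notably "-") -> \xNN.
func escapePath(p string) string {
	for len(p) > 0 && p[0] == '/' {
		p = p[1:]
	}
	for len(p) > 0 && p[len(p)-1] == '/' {
		p = p[:len(p)-1]
	}
	if p == "" {
		return "-"
	}
	out := make([]byte, 0, len(p))
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			out = append(out, '-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			out = append(out, c)
		default:
			out = append(out, []byte(fmt.Sprintf(`\x%02x`, c))...)
		}
	}
	return string(out)
}

func main() {
	// Matches the deactivated mount unit in the log:
	// var-lib-containerd-tmpmounts-containerd\x2dmount2428208144.mount
	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount2428208144") + ".mount")
}
```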
Sep 13 00:10:05.409844 containerd[1474]: time="2025-09-13T00:10:05.409772285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:05.411289 containerd[1474]: time="2025-09-13T00:10:05.411228678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:10:05.416218 containerd[1474]: time="2025-09-13T00:10:05.416114844Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:05.424872 containerd[1474]: time="2025-09-13T00:10:05.424794721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:05.426824 containerd[1474]: time="2025-09-13T00:10:05.426626116Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.420435365s" Sep 13 00:10:05.426824 containerd[1474]: time="2025-09-13T00:10:05.426689378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:10:05.430560 containerd[1474]: time="2025-09-13T00:10:05.428946853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:10:05.430560 containerd[1474]: time="2025-09-13T00:10:05.430383288Z" level=info msg="CreateContainer within sandbox \"4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:10:05.461968 containerd[1474]: time="2025-09-13T00:10:05.461884362Z" level=info msg="CreateContainer within sandbox \"4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"73c009b16e3d9af19476b88002ea52ba7a765c6c3651abde167bdf6fcda7ea06\"" Sep 13 00:10:05.462685 containerd[1474]: time="2025-09-13T00:10:05.462652812Z" level=info msg="StartContainer for \"73c009b16e3d9af19476b88002ea52ba7a765c6c3651abde167bdf6fcda7ea06\"" Sep 13 00:10:05.510471 systemd[1]: Started cri-containerd-73c009b16e3d9af19476b88002ea52ba7a765c6c3651abde167bdf6fcda7ea06.scope - libcontainer container 73c009b16e3d9af19476b88002ea52ba7a765c6c3651abde167bdf6fcda7ea06. 
Sep 13 00:10:05.583431 containerd[1474]: time="2025-09-13T00:10:05.583361256Z" level=info msg="StartContainer for \"73c009b16e3d9af19476b88002ea52ba7a765c6c3651abde167bdf6fcda7ea06\" returns successfully" Sep 13 00:10:05.831927 kubelet[2553]: I0913 00:10:05.831708 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-gg29f" podStartSLOduration=28.196604426 podStartE2EDuration="35.831681783s" podCreationTimestamp="2025-09-13 00:09:30 +0000 UTC" firstStartedPulling="2025-09-13 00:09:57.793581801 +0000 UTC m=+48.805743213" lastFinishedPulling="2025-09-13 00:10:05.428659159 +0000 UTC m=+56.440820570" observedRunningTime="2025-09-13 00:10:05.831079253 +0000 UTC m=+56.843240694" watchObservedRunningTime="2025-09-13 00:10:05.831681783 +0000 UTC m=+56.843843194" Sep 13 00:10:07.298624 systemd[1]: Started sshd@11-10.0.0.89:22-10.0.0.1:38936.service - OpenSSH per-connection server daemon (10.0.0.1:38936). Sep 13 00:10:07.582279 sshd[5152]: Accepted publickey for core from 10.0.0.1 port 38936 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:10:07.585332 sshd[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:07.592221 systemd-logind[1449]: New session 12 of user core. Sep 13 00:10:07.597601 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:10:07.932432 sshd[5152]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:07.937237 systemd[1]: sshd@11-10.0.0.89:22-10.0.0.1:38936.service: Deactivated successfully. Sep 13 00:10:07.939743 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:10:07.940679 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:10:07.941729 systemd-logind[1449]: Removed session 12. Sep 13 00:10:09.074418 containerd[1474]: time="2025-09-13T00:10:09.074367012Z" level=info msg="StopPodSandbox for \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\"" Sep 13 00:10:09.085964 containerd[1474]: time="2025-09-13T00:10:09.085789932Z" level=info msg="StopPodSandbox for \"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\"" Sep 13 00:10:09.943021 containerd[1474]: 2025-09-13 00:10:09.456 [WARNING][5194] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0", GenerateName:"calico-kube-controllers-7854f6d79d-", Namespace:"calico-system", SelfLink:"", UID:"bda40bd6-a908-46e8-9d69-02dbc2713a08", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7854f6d79d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7", Pod:"calico-kube-controllers-7854f6d79d-nbc6q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9a21bafa11a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:09.943021 containerd[1474]: 2025-09-13 00:10:09.457 [INFO][5194] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Sep 13 00:10:09.943021 containerd[1474]: 2025-09-13 00:10:09.457 [INFO][5194] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" iface="eth0" netns="" Sep 13 00:10:09.943021 containerd[1474]: 2025-09-13 00:10:09.457 [INFO][5194] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Sep 13 00:10:09.943021 containerd[1474]: 2025-09-13 00:10:09.457 [INFO][5194] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Sep 13 00:10:09.943021 containerd[1474]: 2025-09-13 00:10:09.479 [INFO][5214] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" HandleID="k8s-pod-network.5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Workload="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" Sep 13 00:10:09.943021 containerd[1474]: 2025-09-13 00:10:09.479 [INFO][5214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:09.943021 containerd[1474]: 2025-09-13 00:10:09.479 [INFO][5214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:09.943021 containerd[1474]: 2025-09-13 00:10:09.740 [WARNING][5214] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" HandleID="k8s-pod-network.5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Workload="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" Sep 13 00:10:09.943021 containerd[1474]: 2025-09-13 00:10:09.740 [INFO][5214] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" HandleID="k8s-pod-network.5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Workload="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" Sep 13 00:10:09.943021 containerd[1474]: 2025-09-13 00:10:09.936 [INFO][5214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:09.943021 containerd[1474]: 2025-09-13 00:10:09.939 [INFO][5194] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Sep 13 00:10:09.943969 containerd[1474]: time="2025-09-13T00:10:09.943100989Z" level=info msg="TearDown network for sandbox \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\" successfully" Sep 13 00:10:09.943969 containerd[1474]: time="2025-09-13T00:10:09.943136758Z" level=info msg="StopPodSandbox for \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\" returns successfully" Sep 13 00:10:09.944059 containerd[1474]: time="2025-09-13T00:10:09.943954318Z" level=info msg="RemovePodSandbox for \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\"" Sep 13 00:10:09.946793 containerd[1474]: time="2025-09-13T00:10:09.946741969Z" level=info msg="Forcibly stopping sandbox \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\"" Sep 13 00:10:09.951045 containerd[1474]: 2025-09-13 00:10:09.875 [INFO][5193] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e" Sep 13 00:10:09.951045 containerd[1474]: 2025-09-13 00:10:09.876 [INFO][5193] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e" iface="eth0" netns="/var/run/netns/cni-42d71424-bb9e-ee1f-9459-7c5b53f5ce28" Sep 13 00:10:09.951045 containerd[1474]: 2025-09-13 00:10:09.876 [INFO][5193] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e" iface="eth0" netns="/var/run/netns/cni-42d71424-bb9e-ee1f-9459-7c5b53f5ce28" Sep 13 00:10:09.951045 containerd[1474]: 2025-09-13 00:10:09.876 [INFO][5193] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e" iface="eth0" netns="/var/run/netns/cni-42d71424-bb9e-ee1f-9459-7c5b53f5ce28" Sep 13 00:10:09.951045 containerd[1474]: 2025-09-13 00:10:09.876 [INFO][5193] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e" Sep 13 00:10:09.951045 containerd[1474]: 2025-09-13 00:10:09.876 [INFO][5193] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e" Sep 13 00:10:09.951045 containerd[1474]: 2025-09-13 00:10:09.898 [INFO][5223] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e" HandleID="k8s-pod-network.27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e" Workload="localhost-k8s-coredns--7c65d6cfc9--bqltl-eth0" Sep 13 00:10:09.951045 containerd[1474]: 2025-09-13 00:10:09.898 [INFO][5223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:09.951045 containerd[1474]: 2025-09-13 00:10:09.936 [INFO][5223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:09.951045 containerd[1474]: 2025-09-13 00:10:09.943 [WARNING][5223] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e" HandleID="k8s-pod-network.27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e" Workload="localhost-k8s-coredns--7c65d6cfc9--bqltl-eth0" Sep 13 00:10:09.951045 containerd[1474]: 2025-09-13 00:10:09.943 [INFO][5223] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e" HandleID="k8s-pod-network.27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e" Workload="localhost-k8s-coredns--7c65d6cfc9--bqltl-eth0" Sep 13 00:10:09.951045 containerd[1474]: 2025-09-13 00:10:09.944 [INFO][5223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:09.951045 containerd[1474]: 2025-09-13 00:10:09.947 [INFO][5193] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e" Sep 13 00:10:09.951464 containerd[1474]: time="2025-09-13T00:10:09.951255104Z" level=info msg="TearDown network for sandbox \"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\" successfully" Sep 13 00:10:09.951464 containerd[1474]: time="2025-09-13T00:10:09.951279500Z" level=info msg="StopPodSandbox for \"27c2c85a2fb9e14f13cfc948404f723dba927eea921465df0378c9a54e1c214e\" returns successfully" Sep 13 00:10:09.951711 kubelet[2553]: E0913 00:10:09.951684 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:09.952145 containerd[1474]: time="2025-09-13T00:10:09.952098452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bqltl,Uid:60e91eea-bf80-4775-825e-60892cf59ae7,Namespace:kube-system,Attempt:1,}" Sep 13 00:10:09.957540 systemd[1]: run-netns-cni\x2d42d71424\x2dbb9e\x2dee1f\x2d9459\x2d7c5b53f5ce28.mount: Deactivated successfully. Sep 13 00:10:10.561531 containerd[1474]: 2025-09-13 00:10:10.525 [WARNING][5241] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0", GenerateName:"calico-kube-controllers-7854f6d79d-", Namespace:"calico-system", SelfLink:"", UID:"bda40bd6-a908-46e8-9d69-02dbc2713a08", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7854f6d79d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7", Pod:"calico-kube-controllers-7854f6d79d-nbc6q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9a21bafa11a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:10.561531 containerd[1474]: 2025-09-13 00:10:10.525 [INFO][5241] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Sep 13 00:10:10.561531 containerd[1474]: 2025-09-13 00:10:10.525 [INFO][5241] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" iface="eth0" netns="" Sep 13 00:10:10.561531 containerd[1474]: 2025-09-13 00:10:10.525 [INFO][5241] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Sep 13 00:10:10.561531 containerd[1474]: 2025-09-13 00:10:10.525 [INFO][5241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Sep 13 00:10:10.561531 containerd[1474]: 2025-09-13 00:10:10.548 [INFO][5258] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" HandleID="k8s-pod-network.5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Workload="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" Sep 13 00:10:10.561531 containerd[1474]: 2025-09-13 00:10:10.548 [INFO][5258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:10.561531 containerd[1474]: 2025-09-13 00:10:10.548 [INFO][5258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:10.561531 containerd[1474]: 2025-09-13 00:10:10.554 [WARNING][5258] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" HandleID="k8s-pod-network.5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Workload="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" Sep 13 00:10:10.561531 containerd[1474]: 2025-09-13 00:10:10.554 [INFO][5258] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" HandleID="k8s-pod-network.5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Workload="localhost-k8s-calico--kube--controllers--7854f6d79d--nbc6q-eth0" Sep 13 00:10:10.561531 containerd[1474]: 2025-09-13 00:10:10.555 [INFO][5258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:10.561531 containerd[1474]: 2025-09-13 00:10:10.558 [INFO][5241] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b" Sep 13 00:10:10.562309 containerd[1474]: time="2025-09-13T00:10:10.561579034Z" level=info msg="TearDown network for sandbox \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\" successfully" Sep 13 00:10:10.968472 containerd[1474]: time="2025-09-13T00:10:10.968388073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:11.050208 containerd[1474]: time="2025-09-13T00:10:11.050087370Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:10:11.050398 containerd[1474]: time="2025-09-13T00:10:11.050229603Z" level=info msg="RemovePodSandbox \"5df66b2a446a5ff75d9b4cabce6f3e2164111255a95b2d4cd7445d880ffd146b\" returns successfully" Sep 13 00:10:11.050946 containerd[1474]: time="2025-09-13T00:10:11.050916571Z" level=info msg="StopPodSandbox for \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\"" Sep 13 00:10:11.286597 containerd[1474]: time="2025-09-13T00:10:11.286407828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:10:11.830637 containerd[1474]: time="2025-09-13T00:10:11.830574837Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:12.034348 containerd[1474]: 2025-09-13 00:10:11.290 [WARNING][5276] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" WorkloadEndpoint="localhost-k8s-whisker--759dcc9f77--fl2jj-eth0" Sep 13 00:10:12.034348 containerd[1474]: 2025-09-13 00:10:11.290 [INFO][5276] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Sep 13 00:10:12.034348 containerd[1474]: 2025-09-13 00:10:11.290 [INFO][5276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" iface="eth0" netns="" Sep 13 00:10:12.034348 containerd[1474]: 2025-09-13 00:10:11.290 [INFO][5276] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Sep 13 00:10:12.034348 containerd[1474]: 2025-09-13 00:10:11.291 [INFO][5276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Sep 13 00:10:12.034348 containerd[1474]: 2025-09-13 00:10:11.315 [INFO][5284] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" HandleID="k8s-pod-network.a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Workload="localhost-k8s-whisker--759dcc9f77--fl2jj-eth0" Sep 13 00:10:12.034348 containerd[1474]: 2025-09-13 00:10:11.315 [INFO][5284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:12.034348 containerd[1474]: 2025-09-13 00:10:11.503 [INFO][5284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:12.034348 containerd[1474]: 2025-09-13 00:10:11.963 [WARNING][5284] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" HandleID="k8s-pod-network.a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Workload="localhost-k8s-whisker--759dcc9f77--fl2jj-eth0" Sep 13 00:10:12.034348 containerd[1474]: 2025-09-13 00:10:11.964 [INFO][5284] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" HandleID="k8s-pod-network.a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Workload="localhost-k8s-whisker--759dcc9f77--fl2jj-eth0" Sep 13 00:10:12.034348 containerd[1474]: 2025-09-13 00:10:12.029 [INFO][5284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:12.034348 containerd[1474]: 2025-09-13 00:10:12.031 [INFO][5276] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Sep 13 00:10:12.034775 containerd[1474]: time="2025-09-13T00:10:12.034406881Z" level=info msg="TearDown network for sandbox \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\" successfully" Sep 13 00:10:12.034775 containerd[1474]: time="2025-09-13T00:10:12.034458870Z" level=info msg="StopPodSandbox for \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\" returns successfully" Sep 13 00:10:12.035129 containerd[1474]: time="2025-09-13T00:10:12.035090991Z" level=info msg="RemovePodSandbox for \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\"" Sep 13 00:10:12.035184 containerd[1474]: time="2025-09-13T00:10:12.035150536Z" level=info msg="Forcibly stopping sandbox \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\"" Sep 13 00:10:12.177328 containerd[1474]: time="2025-09-13T00:10:12.177267172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:12.178171 containerd[1474]: time="2025-09-13T00:10:12.178132681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 6.749136862s" Sep 13 00:10:12.178274 containerd[1474]: time="2025-09-13T00:10:12.178172186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:10:12.179250 containerd[1474]: time="2025-09-13T00:10:12.179218531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:10:12.180252 containerd[1474]: time="2025-09-13T00:10:12.180216283Z" level=info msg="CreateContainer within sandbox \"f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:10:12.691598 containerd[1474]: 2025-09-13 00:10:12.644 [WARNING][5324] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" WorkloadEndpoint="localhost-k8s-whisker--759dcc9f77--fl2jj-eth0" Sep 13 00:10:12.691598 containerd[1474]: 2025-09-13 00:10:12.644 [INFO][5324] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Sep 13 00:10:12.691598 containerd[1474]: 2025-09-13 00:10:12.644 [INFO][5324] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" iface="eth0" netns="" Sep 13 00:10:12.691598 containerd[1474]: 2025-09-13 00:10:12.644 [INFO][5324] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Sep 13 00:10:12.691598 containerd[1474]: 2025-09-13 00:10:12.644 [INFO][5324] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Sep 13 00:10:12.691598 containerd[1474]: 2025-09-13 00:10:12.675 [INFO][5347] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" HandleID="k8s-pod-network.a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Workload="localhost-k8s-whisker--759dcc9f77--fl2jj-eth0" Sep 13 00:10:12.691598 containerd[1474]: 2025-09-13 00:10:12.675 [INFO][5347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:12.691598 containerd[1474]: 2025-09-13 00:10:12.675 [INFO][5347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:12.691598 containerd[1474]: 2025-09-13 00:10:12.683 [WARNING][5347] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" HandleID="k8s-pod-network.a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Workload="localhost-k8s-whisker--759dcc9f77--fl2jj-eth0" Sep 13 00:10:12.691598 containerd[1474]: 2025-09-13 00:10:12.684 [INFO][5347] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" HandleID="k8s-pod-network.a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Workload="localhost-k8s-whisker--759dcc9f77--fl2jj-eth0" Sep 13 00:10:12.691598 containerd[1474]: 2025-09-13 00:10:12.685 [INFO][5347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:12.691598 containerd[1474]: 2025-09-13 00:10:12.688 [INFO][5324] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae" Sep 13 00:10:12.692287 containerd[1474]: time="2025-09-13T00:10:12.691646910Z" level=info msg="TearDown network for sandbox \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\" successfully" Sep 13 00:10:12.860440 systemd-networkd[1398]: calia94b4f1d059: Link UP Sep 13 00:10:12.862245 systemd-networkd[1398]: calia94b4f1d059: Gained carrier Sep 13 00:10:12.951688 systemd[1]: Started sshd@12-10.0.0.89:22-10.0.0.1:41534.service - OpenSSH per-connection server daemon (10.0.0.1:41534). Sep 13 00:10:13.023323 sshd[5368]: Accepted publickey for core from 10.0.0.1 port 41534 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:10:13.025327 sshd[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:13.041908 systemd-logind[1449]: New session 13 of user core. Sep 13 00:10:13.051376 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:10:13.067421 containerd[1474]: time="2025-09-13T00:10:13.067280173Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:10:13.067421 containerd[1474]: time="2025-09-13T00:10:13.067418197Z" level=info msg="RemovePodSandbox \"a006af800c9ddc958c8073995958cb5587fd96ed4990c5628ee0a89371d173ae\" returns successfully" Sep 13 00:10:13.068314 containerd[1474]: time="2025-09-13T00:10:13.068274698Z" level=info msg="StopPodSandbox for \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\"" Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.648 [INFO][5333] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--bqltl-eth0 coredns-7c65d6cfc9- kube-system 60e91eea-bf80-4775-825e-60892cf59ae7 1140 0 2025-09-13 00:09:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-bqltl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia94b4f1d059 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bqltl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bqltl-" Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.648 [INFO][5333] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bqltl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bqltl-eth0" Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.682 [INFO][5354] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" HandleID="k8s-pod-network.073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" Workload="localhost-k8s-coredns--7c65d6cfc9--bqltl-eth0" Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.682 [INFO][5354] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" HandleID="k8s-pod-network.073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" Workload="localhost-k8s-coredns--7c65d6cfc9--bqltl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000510ba0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-bqltl", "timestamp":"2025-09-13 00:10:12.682606245 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.682 [INFO][5354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.685 [INFO][5354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.685 [INFO][5354] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.694 [INFO][5354] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" host="localhost" Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.700 [INFO][5354] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.705 [INFO][5354] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.707 [INFO][5354] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.710 [INFO][5354] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.710 [INFO][5354] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" host="localhost" Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.712 [INFO][5354] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108 Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.732 [INFO][5354] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" host="localhost" Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.855 [INFO][5354] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" host="localhost" Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.855 [INFO][5354] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" host="localhost" Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.855 [INFO][5354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:10:13.417087 containerd[1474]: 2025-09-13 00:10:12.855 [INFO][5354] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" HandleID="k8s-pod-network.073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" Workload="localhost-k8s-coredns--7c65d6cfc9--bqltl-eth0" Sep 13 00:10:13.519361 containerd[1474]: 2025-09-13 00:10:12.858 [INFO][5333] cni-plugin/k8s.go 418: Populated endpoint ContainerID="073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bqltl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bqltl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--bqltl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"60e91eea-bf80-4775-825e-60892cf59ae7", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-bqltl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia94b4f1d059", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:13.519361 containerd[1474]: 2025-09-13 00:10:12.858 [INFO][5333] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bqltl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bqltl-eth0" Sep 13 00:10:13.519361 containerd[1474]: 2025-09-13 00:10:12.858 [INFO][5333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia94b4f1d059 ContainerID="073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bqltl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bqltl-eth0" Sep 13 00:10:13.519361 containerd[1474]: 2025-09-13 00:10:12.860 [INFO][5333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bqltl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bqltl-eth0" Sep 13 00:10:13.519361 
containerd[1474]: 2025-09-13 00:10:12.861 [INFO][5333] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bqltl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bqltl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--bqltl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"60e91eea-bf80-4775-825e-60892cf59ae7", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108", Pod:"coredns-7c65d6cfc9-bqltl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia94b4f1d059", MAC:"7a:05:a6:88:c4:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:13.519361 containerd[1474]: 2025-09-13 00:10:13.414 [INFO][5333] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bqltl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bqltl-eth0" Sep 13 00:10:13.577314 containerd[1474]: 2025-09-13 00:10:13.532 [WARNING][5381] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a4e7348d-2cc7-4002-b829-c570ed8d1df0", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f", Pod:"coredns-7c65d6cfc9-q44l7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid054275f013", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:13.577314 containerd[1474]: 2025-09-13 00:10:13.532 [INFO][5381] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Sep 13 00:10:13.577314 containerd[1474]: 2025-09-13 00:10:13.532 [INFO][5381] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" iface="eth0" netns="" Sep 13 00:10:13.577314 containerd[1474]: 2025-09-13 00:10:13.532 [INFO][5381] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Sep 13 00:10:13.577314 containerd[1474]: 2025-09-13 00:10:13.532 [INFO][5381] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Sep 13 00:10:13.577314 containerd[1474]: 2025-09-13 00:10:13.560 [INFO][5407] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" HandleID="k8s-pod-network.692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Workload="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" Sep 13 00:10:13.577314 containerd[1474]: 2025-09-13 00:10:13.560 [INFO][5407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:13.577314 containerd[1474]: 2025-09-13 00:10:13.560 [INFO][5407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:10:13.577314 containerd[1474]: 2025-09-13 00:10:13.567 [WARNING][5407] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" HandleID="k8s-pod-network.692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Workload="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" Sep 13 00:10:13.577314 containerd[1474]: 2025-09-13 00:10:13.567 [INFO][5407] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" HandleID="k8s-pod-network.692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Workload="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" Sep 13 00:10:13.577314 containerd[1474]: 2025-09-13 00:10:13.569 [INFO][5407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:13.577314 containerd[1474]: 2025-09-13 00:10:13.573 [INFO][5381] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Sep 13 00:10:13.577775 containerd[1474]: time="2025-09-13T00:10:13.577356586Z" level=info msg="TearDown network for sandbox \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\" successfully" Sep 13 00:10:13.577775 containerd[1474]: time="2025-09-13T00:10:13.577390311Z" level=info msg="StopPodSandbox for \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\" returns successfully" Sep 13 00:10:13.578468 containerd[1474]: time="2025-09-13T00:10:13.578059253Z" level=info msg="RemovePodSandbox for \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\"" Sep 13 00:10:13.578468 containerd[1474]: time="2025-09-13T00:10:13.578103958Z" level=info msg="Forcibly stopping sandbox \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\"" Sep 13 00:10:13.780867 sshd[5368]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:13.787152 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:10:13.787863 systemd[1]: sshd@12-10.0.0.89:22-10.0.0.1:41534.service: Deactivated successfully. Sep 13 00:10:13.791788 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:10:13.794122 systemd-logind[1449]: Removed session 13. Sep 13 00:10:13.805116 containerd[1474]: 2025-09-13 00:10:13.765 [WARNING][5425] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a4e7348d-2cc7-4002-b829-c570ed8d1df0", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d114e92b551fd6ef868afe41078b8a08f40aa106b00cde53626ee4daffd0979f", Pod:"coredns-7c65d6cfc9-q44l7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid054275f013", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:13.805116 containerd[1474]: 2025-09-13 00:10:13.765 [INFO][5425] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Sep 13 00:10:13.805116 containerd[1474]: 2025-09-13 00:10:13.765 [INFO][5425] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" iface="eth0" netns="" Sep 13 00:10:13.805116 containerd[1474]: 2025-09-13 00:10:13.765 [INFO][5425] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Sep 13 00:10:13.805116 containerd[1474]: 2025-09-13 00:10:13.765 [INFO][5425] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Sep 13 00:10:13.805116 containerd[1474]: 2025-09-13 00:10:13.788 [INFO][5434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" HandleID="k8s-pod-network.692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Workload="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" Sep 13 00:10:13.805116 containerd[1474]: 2025-09-13 00:10:13.788 [INFO][5434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:13.805116 containerd[1474]: 2025-09-13 00:10:13.788 [INFO][5434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:10:13.805116 containerd[1474]: 2025-09-13 00:10:13.796 [WARNING][5434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" HandleID="k8s-pod-network.692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Workload="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" Sep 13 00:10:13.805116 containerd[1474]: 2025-09-13 00:10:13.796 [INFO][5434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" HandleID="k8s-pod-network.692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Workload="localhost-k8s-coredns--7c65d6cfc9--q44l7-eth0" Sep 13 00:10:13.805116 containerd[1474]: 2025-09-13 00:10:13.797 [INFO][5434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:13.805116 containerd[1474]: 2025-09-13 00:10:13.801 [INFO][5425] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a" Sep 13 00:10:13.806499 containerd[1474]: time="2025-09-13T00:10:13.805171886Z" level=info msg="TearDown network for sandbox \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\" successfully" Sep 13 00:10:13.986912 containerd[1474]: time="2025-09-13T00:10:13.986449309Z" level=info msg="CreateContainer within sandbox \"f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"446209c00bc649cc62b892ceb4996ac8f6742045c645c0c625153f8462c178c3\"" Sep 13 00:10:13.987748 containerd[1474]: time="2025-09-13T00:10:13.987687250Z" level=info msg="StartContainer for \"446209c00bc649cc62b892ceb4996ac8f6742045c645c0c625153f8462c178c3\"" Sep 13 00:10:14.005763 containerd[1474]: time="2025-09-13T00:10:14.005681541Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:10:14.005920 containerd[1474]: time="2025-09-13T00:10:14.005778367Z" level=info msg="RemovePodSandbox \"692b91d1ed88bffc2349e1012eedd8a0ded4df87d894ce5c17d837eff561dd6a\" returns successfully" Sep 13 00:10:14.006496 containerd[1474]: time="2025-09-13T00:10:14.006449521Z" level=info msg="StopPodSandbox for \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\"" Sep 13 00:10:14.025337 containerd[1474]: time="2025-09-13T00:10:14.024944216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:14.025694 containerd[1474]: time="2025-09-13T00:10:14.025548183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:14.026098 containerd[1474]: time="2025-09-13T00:10:14.026040535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:14.027478 containerd[1474]: time="2025-09-13T00:10:14.027341025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:14.058628 systemd[1]: run-containerd-runc-k8s.io-073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108-runc.B5wu14.mount: Deactivated successfully. 
Sep 13 00:10:14.077146 systemd[1]: Started cri-containerd-073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108.scope - libcontainer container 073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108. Sep 13 00:10:14.078850 systemd[1]: Started cri-containerd-446209c00bc649cc62b892ceb4996ac8f6742045c645c0c625153f8462c178c3.scope - libcontainer container 446209c00bc649cc62b892ceb4996ac8f6742045c645c0c625153f8462c178c3. Sep 13 00:10:14.096507 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:10:14.212140 containerd[1474]: 2025-09-13 00:10:14.166 [WARNING][5470] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--gg29f-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"b17de5d2-115f-45db-a278-f3d4d17bee79", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0", Pod:"goldmane-7988f88666-gg29f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidbaf68abd87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:14.212140 containerd[1474]: 2025-09-13 00:10:14.167 [INFO][5470] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Sep 13 00:10:14.212140 containerd[1474]: 2025-09-13 00:10:14.167 [INFO][5470] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" iface="eth0" netns="" Sep 13 00:10:14.212140 containerd[1474]: 2025-09-13 00:10:14.167 [INFO][5470] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Sep 13 00:10:14.212140 containerd[1474]: 2025-09-13 00:10:14.167 [INFO][5470] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Sep 13 00:10:14.212140 containerd[1474]: 2025-09-13 00:10:14.193 [INFO][5537] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" HandleID="k8s-pod-network.a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Workload="localhost-k8s-goldmane--7988f88666--gg29f-eth0" Sep 13 00:10:14.212140 containerd[1474]: 2025-09-13 00:10:14.193 [INFO][5537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:14.212140 containerd[1474]: 2025-09-13 00:10:14.193 [INFO][5537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:14.212140 containerd[1474]: 2025-09-13 00:10:14.201 [WARNING][5537] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" HandleID="k8s-pod-network.a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Workload="localhost-k8s-goldmane--7988f88666--gg29f-eth0" Sep 13 00:10:14.212140 containerd[1474]: 2025-09-13 00:10:14.201 [INFO][5537] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" HandleID="k8s-pod-network.a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Workload="localhost-k8s-goldmane--7988f88666--gg29f-eth0" Sep 13 00:10:14.212140 containerd[1474]: 2025-09-13 00:10:14.204 [INFO][5537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:14.212140 containerd[1474]: 2025-09-13 00:10:14.208 [INFO][5470] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Sep 13 00:10:14.213388 containerd[1474]: time="2025-09-13T00:10:14.212228963Z" level=info msg="TearDown network for sandbox \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\" successfully" Sep 13 00:10:14.213388 containerd[1474]: time="2025-09-13T00:10:14.212262268Z" level=info msg="StopPodSandbox for \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\" returns successfully" Sep 13 00:10:14.213388 containerd[1474]: time="2025-09-13T00:10:14.212892523Z" level=info msg="RemovePodSandbox for \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\"" Sep 13 00:10:14.213388 containerd[1474]: time="2025-09-13T00:10:14.212935406Z" level=info msg="Forcibly stopping sandbox \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\"" Sep 13 00:10:14.293625 containerd[1474]: 2025-09-13 00:10:14.252 [WARNING][5555] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--gg29f-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"b17de5d2-115f-45db-a278-f3d4d17bee79", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fae3d4f357181ac8a4eff21c60e2c58c82baba9ed34bba9c83e753edf93f1e0", Pod:"goldmane-7988f88666-gg29f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidbaf68abd87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:14.293625 containerd[1474]: 2025-09-13 00:10:14.252 [INFO][5555] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Sep 13 00:10:14.293625 containerd[1474]: 2025-09-13 00:10:14.252 [INFO][5555] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" iface="eth0" netns="" Sep 13 00:10:14.293625 containerd[1474]: 2025-09-13 00:10:14.252 [INFO][5555] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Sep 13 00:10:14.293625 containerd[1474]: 2025-09-13 00:10:14.252 [INFO][5555] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Sep 13 00:10:14.293625 containerd[1474]: 2025-09-13 00:10:14.279 [INFO][5564] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" HandleID="k8s-pod-network.a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Workload="localhost-k8s-goldmane--7988f88666--gg29f-eth0" Sep 13 00:10:14.293625 containerd[1474]: 2025-09-13 00:10:14.279 [INFO][5564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:14.293625 containerd[1474]: 2025-09-13 00:10:14.279 [INFO][5564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:14.293625 containerd[1474]: 2025-09-13 00:10:14.286 [WARNING][5564] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" HandleID="k8s-pod-network.a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Workload="localhost-k8s-goldmane--7988f88666--gg29f-eth0" Sep 13 00:10:14.293625 containerd[1474]: 2025-09-13 00:10:14.286 [INFO][5564] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" HandleID="k8s-pod-network.a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Workload="localhost-k8s-goldmane--7988f88666--gg29f-eth0" Sep 13 00:10:14.293625 containerd[1474]: 2025-09-13 00:10:14.288 [INFO][5564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:14.293625 containerd[1474]: 2025-09-13 00:10:14.290 [INFO][5555] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1" Sep 13 00:10:14.294320 containerd[1474]: time="2025-09-13T00:10:14.293676593Z" level=info msg="TearDown network for sandbox \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\" successfully" Sep 13 00:10:14.537240 systemd-networkd[1398]: calia94b4f1d059: Gained IPv6LL Sep 13 00:10:14.829911 containerd[1474]: time="2025-09-13T00:10:14.829762284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bqltl,Uid:60e91eea-bf80-4775-825e-60892cf59ae7,Namespace:kube-system,Attempt:1,} returns sandbox id \"073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108\"" Sep 13 00:10:14.829911 containerd[1474]: time="2025-09-13T00:10:14.829798293Z" level=info msg="StartContainer for \"446209c00bc649cc62b892ceb4996ac8f6742045c645c0c625153f8462c178c3\" returns successfully" Sep 13 00:10:14.831264 kubelet[2553]: E0913 00:10:14.831021 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:14.834238 containerd[1474]: time="2025-09-13T00:10:14.834147459Z" level=info msg="CreateContainer within sandbox \"073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:10:14.840177 containerd[1474]: time="2025-09-13T00:10:14.840051541Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:10:14.840177 containerd[1474]: time="2025-09-13T00:10:14.840117988Z" level=info msg="RemovePodSandbox \"a14ac9b2709e615a7702b2d27310fbc593895736a32b13afaaf50681160d9da1\" returns successfully" Sep 13 00:10:14.840675 containerd[1474]: time="2025-09-13T00:10:14.840640197Z" level=info msg="StopPodSandbox for \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\"" Sep 13 00:10:14.883259 containerd[1474]: time="2025-09-13T00:10:14.882898405Z" level=info msg="CreateContainer within sandbox \"073aba0c544225ab9b4e94d59a3af9cf68b28d10453671c7313abf61503b6108\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d2230acccdc4e2bc0f5f71556518ce1906e8b2388bb489ef4fcbdf7888bffa2b\"" Sep 13 00:10:14.884869 containerd[1474]: time="2025-09-13T00:10:14.883794871Z" level=info msg="StartContainer for \"d2230acccdc4e2bc0f5f71556518ce1906e8b2388bb489ef4fcbdf7888bffa2b\"" Sep 13 00:10:14.924341 systemd[1]: Started cri-containerd-d2230acccdc4e2bc0f5f71556518ce1906e8b2388bb489ef4fcbdf7888bffa2b.scope - libcontainer container d2230acccdc4e2bc0f5f71556518ce1906e8b2388bb489ef4fcbdf7888bffa2b. Sep 13 00:10:14.941155 containerd[1474]: 2025-09-13 00:10:14.888 [WARNING][5589] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0", GenerateName:"calico-apiserver-76f56b585c-", Namespace:"calico-apiserver", SelfLink:"", UID:"925826ed-f268-4368-bf97-95adf0976969", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f56b585c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21", Pod:"calico-apiserver-76f56b585c-zr49b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16f1f4fff48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:14.941155 containerd[1474]: 2025-09-13 00:10:14.888 [INFO][5589] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Sep 13 00:10:14.941155 containerd[1474]: 2025-09-13 00:10:14.888 [INFO][5589] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" iface="eth0" netns="" Sep 13 00:10:14.941155 containerd[1474]: 2025-09-13 00:10:14.888 [INFO][5589] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Sep 13 00:10:14.941155 containerd[1474]: 2025-09-13 00:10:14.888 [INFO][5589] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Sep 13 00:10:14.941155 containerd[1474]: 2025-09-13 00:10:14.922 [INFO][5598] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" HandleID="k8s-pod-network.a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Workload="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" Sep 13 00:10:14.941155 containerd[1474]: 2025-09-13 00:10:14.922 [INFO][5598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:14.941155 containerd[1474]: 2025-09-13 00:10:14.922 [INFO][5598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:14.941155 containerd[1474]: 2025-09-13 00:10:14.929 [WARNING][5598] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" HandleID="k8s-pod-network.a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Workload="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" Sep 13 00:10:14.941155 containerd[1474]: 2025-09-13 00:10:14.929 [INFO][5598] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" HandleID="k8s-pod-network.a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Workload="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" Sep 13 00:10:14.941155 containerd[1474]: 2025-09-13 00:10:14.931 [INFO][5598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:14.941155 containerd[1474]: 2025-09-13 00:10:14.935 [INFO][5589] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Sep 13 00:10:14.943820 containerd[1474]: time="2025-09-13T00:10:14.942102923Z" level=info msg="TearDown network for sandbox \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\" successfully" Sep 13 00:10:14.943820 containerd[1474]: time="2025-09-13T00:10:14.942143961Z" level=info msg="StopPodSandbox for \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\" returns successfully" Sep 13 00:10:14.943820 containerd[1474]: time="2025-09-13T00:10:14.942901461Z" level=info msg="RemovePodSandbox for \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\"" Sep 13 00:10:14.943820 containerd[1474]: time="2025-09-13T00:10:14.942934815Z" level=info msg="Forcibly stopping sandbox \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\"" Sep 13 00:10:15.114672 containerd[1474]: time="2025-09-13T00:10:15.114489016Z" level=info msg="StartContainer for \"d2230acccdc4e2bc0f5f71556518ce1906e8b2388bb489ef4fcbdf7888bffa2b\" returns successfully" Sep 13 00:10:15.166068 containerd[1474]: 2025-09-13 00:10:15.017 [WARNING][5639] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0", GenerateName:"calico-apiserver-76f56b585c-", Namespace:"calico-apiserver", SelfLink:"", UID:"925826ed-f268-4368-bf97-95adf0976969", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f56b585c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f69f4d50757ef60d078df78c066b9505da4559ad715c6bdbbe3adc810caedc21", Pod:"calico-apiserver-76f56b585c-zr49b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16f1f4fff48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:15.166068 containerd[1474]: 2025-09-13 00:10:15.019 [INFO][5639] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Sep 13 00:10:15.166068 containerd[1474]: 2025-09-13 00:10:15.019 [INFO][5639] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" iface="eth0" netns="" Sep 13 00:10:15.166068 containerd[1474]: 2025-09-13 00:10:15.019 [INFO][5639] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Sep 13 00:10:15.166068 containerd[1474]: 2025-09-13 00:10:15.019 [INFO][5639] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Sep 13 00:10:15.166068 containerd[1474]: 2025-09-13 00:10:15.043 [INFO][5658] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" HandleID="k8s-pod-network.a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Workload="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" Sep 13 00:10:15.166068 containerd[1474]: 2025-09-13 00:10:15.043 [INFO][5658] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:15.166068 containerd[1474]: 2025-09-13 00:10:15.043 [INFO][5658] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:15.166068 containerd[1474]: 2025-09-13 00:10:15.090 [WARNING][5658] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" HandleID="k8s-pod-network.a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Workload="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" Sep 13 00:10:15.166068 containerd[1474]: 2025-09-13 00:10:15.113 [INFO][5658] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" HandleID="k8s-pod-network.a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Workload="localhost-k8s-calico--apiserver--76f56b585c--zr49b-eth0" Sep 13 00:10:15.166068 containerd[1474]: 2025-09-13 00:10:15.159 [INFO][5658] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:15.166068 containerd[1474]: 2025-09-13 00:10:15.162 [INFO][5639] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0" Sep 13 00:10:15.166565 containerd[1474]: time="2025-09-13T00:10:15.166105597Z" level=info msg="TearDown network for sandbox \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\" successfully" Sep 13 00:10:15.172826 containerd[1474]: time="2025-09-13T00:10:15.172756903Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:10:15.172826 containerd[1474]: time="2025-09-13T00:10:15.172824683Z" level=info msg="RemovePodSandbox \"a83d2f89d20cffb9171c7d6f2d3bf2101a52214199ed5457a69ae7abc873bde0\" returns successfully" Sep 13 00:10:15.173569 containerd[1474]: time="2025-09-13T00:10:15.173519211Z" level=info msg="StopPodSandbox for \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\"" Sep 13 00:10:15.260953 containerd[1474]: 2025-09-13 00:10:15.216 [WARNING][5680] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0", GenerateName:"calico-apiserver-76f56b585c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a73c3ca8-469a-40ee-8971-a5fa4ecc6460", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f56b585c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba", Pod:"calico-apiserver-76f56b585c-gwf2q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92b8ab1575d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:15.260953 containerd[1474]: 2025-09-13 00:10:15.216 [INFO][5680] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Sep 13 00:10:15.260953 containerd[1474]: 2025-09-13 00:10:15.216 [INFO][5680] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" iface="eth0" netns="" Sep 13 00:10:15.260953 containerd[1474]: 2025-09-13 00:10:15.216 [INFO][5680] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Sep 13 00:10:15.260953 containerd[1474]: 2025-09-13 00:10:15.216 [INFO][5680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Sep 13 00:10:15.260953 containerd[1474]: 2025-09-13 00:10:15.244 [INFO][5689] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" HandleID="k8s-pod-network.760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Workload="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" Sep 13 00:10:15.260953 containerd[1474]: 2025-09-13 00:10:15.245 [INFO][5689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:15.260953 containerd[1474]: 2025-09-13 00:10:15.245 [INFO][5689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:15.260953 containerd[1474]: 2025-09-13 00:10:15.252 [WARNING][5689] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" HandleID="k8s-pod-network.760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Workload="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" Sep 13 00:10:15.260953 containerd[1474]: 2025-09-13 00:10:15.252 [INFO][5689] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" HandleID="k8s-pod-network.760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Workload="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" Sep 13 00:10:15.260953 containerd[1474]: 2025-09-13 00:10:15.254 [INFO][5689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:15.260953 containerd[1474]: 2025-09-13 00:10:15.257 [INFO][5680] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Sep 13 00:10:15.262092 containerd[1474]: time="2025-09-13T00:10:15.260967371Z" level=info msg="TearDown network for sandbox \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\" successfully" Sep 13 00:10:15.262092 containerd[1474]: time="2025-09-13T00:10:15.261015653Z" level=info msg="StopPodSandbox for \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\" returns successfully" Sep 13 00:10:15.262092 containerd[1474]: time="2025-09-13T00:10:15.261721252Z" level=info msg="RemovePodSandbox for \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\"" Sep 13 00:10:15.262092 containerd[1474]: time="2025-09-13T00:10:15.261779044Z" level=info msg="Forcibly stopping sandbox \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\"" Sep 13 00:10:15.342015 containerd[1474]: 2025-09-13 00:10:15.302 [WARNING][5709] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0", GenerateName:"calico-apiserver-76f56b585c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a73c3ca8-469a-40ee-8971-a5fa4ecc6460", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f56b585c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98581872b0b1a4a6aa828f55502150805203e4ad934cb22ed93023487afabfba", Pod:"calico-apiserver-76f56b585c-gwf2q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92b8ab1575d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:15.342015 containerd[1474]: 2025-09-13 00:10:15.303 [INFO][5709] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Sep 13 00:10:15.342015 containerd[1474]: 2025-09-13 00:10:15.303 [INFO][5709] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" iface="eth0" netns="" Sep 13 00:10:15.342015 containerd[1474]: 2025-09-13 00:10:15.303 [INFO][5709] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Sep 13 00:10:15.342015 containerd[1474]: 2025-09-13 00:10:15.303 [INFO][5709] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Sep 13 00:10:15.342015 containerd[1474]: 2025-09-13 00:10:15.327 [INFO][5718] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" HandleID="k8s-pod-network.760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Workload="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" Sep 13 00:10:15.342015 containerd[1474]: 2025-09-13 00:10:15.327 [INFO][5718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:15.342015 containerd[1474]: 2025-09-13 00:10:15.328 [INFO][5718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:15.342015 containerd[1474]: 2025-09-13 00:10:15.334 [WARNING][5718] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" HandleID="k8s-pod-network.760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Workload="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" Sep 13 00:10:15.342015 containerd[1474]: 2025-09-13 00:10:15.334 [INFO][5718] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" HandleID="k8s-pod-network.760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Workload="localhost-k8s-calico--apiserver--76f56b585c--gwf2q-eth0" Sep 13 00:10:15.342015 containerd[1474]: 2025-09-13 00:10:15.336 [INFO][5718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:15.342015 containerd[1474]: 2025-09-13 00:10:15.339 [INFO][5709] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba" Sep 13 00:10:15.342510 containerd[1474]: time="2025-09-13T00:10:15.342034691Z" level=info msg="TearDown network for sandbox \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\" successfully" Sep 13 00:10:15.494477 containerd[1474]: time="2025-09-13T00:10:15.494391623Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:10:15.494653 containerd[1474]: time="2025-09-13T00:10:15.494500592Z" level=info msg="RemovePodSandbox \"760dd151c0bd39d9335e0dadedd2fc500a04c4ce72646c17f301497858823fba\" returns successfully" Sep 13 00:10:15.495159 containerd[1474]: time="2025-09-13T00:10:15.495131850Z" level=info msg="StopPodSandbox for \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\"" Sep 13 00:10:15.828603 containerd[1474]: 2025-09-13 00:10:15.529 [WARNING][5737] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t8tr4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2983a866-de88-4651-bc70-4e6c5a764426", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f", Pod:"csi-node-driver-t8tr4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8d17b75e1c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:15.828603 containerd[1474]: 2025-09-13 00:10:15.530 [INFO][5737] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Sep 13 00:10:15.828603 containerd[1474]: 2025-09-13 00:10:15.530 [INFO][5737] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" iface="eth0" netns="" Sep 13 00:10:15.828603 containerd[1474]: 2025-09-13 00:10:15.530 [INFO][5737] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Sep 13 00:10:15.828603 containerd[1474]: 2025-09-13 00:10:15.530 [INFO][5737] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Sep 13 00:10:15.828603 containerd[1474]: 2025-09-13 00:10:15.684 [INFO][5745] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" HandleID="k8s-pod-network.da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Workload="localhost-k8s-csi--node--driver--t8tr4-eth0" Sep 13 00:10:15.828603 containerd[1474]: 2025-09-13 00:10:15.685 [INFO][5745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:15.828603 containerd[1474]: 2025-09-13 00:10:15.685 [INFO][5745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:15.828603 containerd[1474]: 2025-09-13 00:10:15.820 [WARNING][5745] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" HandleID="k8s-pod-network.da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Workload="localhost-k8s-csi--node--driver--t8tr4-eth0" Sep 13 00:10:15.828603 containerd[1474]: 2025-09-13 00:10:15.820 [INFO][5745] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" HandleID="k8s-pod-network.da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Workload="localhost-k8s-csi--node--driver--t8tr4-eth0" Sep 13 00:10:15.828603 containerd[1474]: 2025-09-13 00:10:15.822 [INFO][5745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:15.828603 containerd[1474]: 2025-09-13 00:10:15.825 [INFO][5737] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Sep 13 00:10:15.828603 containerd[1474]: time="2025-09-13T00:10:15.828509708Z" level=info msg="TearDown network for sandbox \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\" successfully" Sep 13 00:10:15.828603 containerd[1474]: time="2025-09-13T00:10:15.828537080Z" level=info msg="StopPodSandbox for \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\" returns successfully" Sep 13 00:10:15.867263 containerd[1474]: time="2025-09-13T00:10:15.829088635Z" level=info msg="RemovePodSandbox for \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\"" Sep 13 00:10:15.867263 containerd[1474]: time="2025-09-13T00:10:15.829114956Z" level=info msg="Forcibly stopping sandbox \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\"" Sep 13 00:10:15.867400 kubelet[2553]: E0913 00:10:15.853324 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:15.987987 kubelet[2553]: I0913 00:10:15.987915 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bqltl" podStartSLOduration=61.98788051 podStartE2EDuration="1m1.98788051s" podCreationTimestamp="2025-09-13 00:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:10:15.986446586 +0000 UTC m=+66.998607997" watchObservedRunningTime="2025-09-13 00:10:15.98788051 +0000 UTC m=+67.000041921" Sep 13 00:10:16.195219 containerd[1474]: 2025-09-13 00:10:16.025 [WARNING][5762] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t8tr4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2983a866-de88-4651-bc70-4e6c5a764426", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 9, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f", Pod:"csi-node-driver-t8tr4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8d17b75e1c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:10:16.195219 containerd[1474]: 2025-09-13 00:10:16.026 [INFO][5762] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Sep 13 00:10:16.195219 containerd[1474]: 2025-09-13 00:10:16.026 [INFO][5762] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" iface="eth0" netns="" Sep 13 00:10:16.195219 containerd[1474]: 2025-09-13 00:10:16.026 [INFO][5762] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Sep 13 00:10:16.195219 containerd[1474]: 2025-09-13 00:10:16.026 [INFO][5762] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Sep 13 00:10:16.195219 containerd[1474]: 2025-09-13 00:10:16.053 [INFO][5773] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" HandleID="k8s-pod-network.da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Workload="localhost-k8s-csi--node--driver--t8tr4-eth0" Sep 13 00:10:16.195219 containerd[1474]: 2025-09-13 00:10:16.053 [INFO][5773] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:10:16.195219 containerd[1474]: 2025-09-13 00:10:16.053 [INFO][5773] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:10:16.195219 containerd[1474]: 2025-09-13 00:10:16.185 [WARNING][5773] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" HandleID="k8s-pod-network.da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Workload="localhost-k8s-csi--node--driver--t8tr4-eth0" Sep 13 00:10:16.195219 containerd[1474]: 2025-09-13 00:10:16.185 [INFO][5773] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" HandleID="k8s-pod-network.da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Workload="localhost-k8s-csi--node--driver--t8tr4-eth0" Sep 13 00:10:16.195219 containerd[1474]: 2025-09-13 00:10:16.188 [INFO][5773] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:10:16.195219 containerd[1474]: 2025-09-13 00:10:16.191 [INFO][5762] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766" Sep 13 00:10:16.195736 containerd[1474]: time="2025-09-13T00:10:16.195260915Z" level=info msg="TearDown network for sandbox \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\" successfully" Sep 13 00:10:16.521813 containerd[1474]: time="2025-09-13T00:10:16.521622766Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:10:16.521813 containerd[1474]: time="2025-09-13T00:10:16.521731885Z" level=info msg="RemovePodSandbox \"da84ae1e1c8cc427909e9f45ac15ce79e123cc48a410002418845d06653a7766\" returns successfully" Sep 13 00:10:16.862759 kubelet[2553]: E0913 00:10:16.862590 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:17.873486 kubelet[2553]: E0913 00:10:17.873442 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:18.806230 systemd[1]: Started sshd@13-10.0.0.89:22-10.0.0.1:41562.service - OpenSSH per-connection server daemon (10.0.0.1:41562). Sep 13 00:10:18.871145 sshd[5792]: Accepted publickey for core from 10.0.0.1 port 41562 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:10:18.873352 sshd[5792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:18.881812 systemd-logind[1449]: New session 14 of user core. Sep 13 00:10:18.887367 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 13 00:10:19.355092 containerd[1474]: time="2025-09-13T00:10:19.354847961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:19.356253 containerd[1474]: time="2025-09-13T00:10:19.356188811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:10:19.357920 containerd[1474]: time="2025-09-13T00:10:19.357867396Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:19.362700 containerd[1474]: time="2025-09-13T00:10:19.362651132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:19.363612 containerd[1474]: time="2025-09-13T00:10:19.363561460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 7.184314355s" Sep 13 00:10:19.363721 containerd[1474]: time="2025-09-13T00:10:19.363616606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:10:19.365522 containerd[1474]: time="2025-09-13T00:10:19.365471468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:10:19.377058 containerd[1474]: time="2025-09-13T00:10:19.374240955Z" level=info msg="CreateContainer within sandbox \"77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:10:19.415932 sshd[5792]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:19.425786 systemd[1]: sshd@13-10.0.0.89:22-10.0.0.1:41562.service: Deactivated successfully. Sep 13 00:10:19.427120 containerd[1474]: time="2025-09-13T00:10:19.427070072Z" level=info msg="CreateContainer within sandbox \"77540a0458fa592836434eb02480f88a80bca97070be60e61468e5def05307c7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7e309e8a31e6aa875d9c993c60a8abc8ff391e7800a42951a91c6801b1e3508a\"" Sep 13 00:10:19.427771 containerd[1474]: time="2025-09-13T00:10:19.427740923Z" level=info msg="StartContainer for \"7e309e8a31e6aa875d9c993c60a8abc8ff391e7800a42951a91c6801b1e3508a\"" Sep 13 00:10:19.428514 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:10:19.432264 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:10:19.439512 systemd[1]: Started sshd@14-10.0.0.89:22-10.0.0.1:41566.service - OpenSSH per-connection server daemon (10.0.0.1:41566). Sep 13 00:10:19.441063 systemd-logind[1449]: Removed session 14. 
Sep 13 00:10:19.479354 sshd[5813]: Accepted publickey for core from 10.0.0.1 port 41566 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:10:19.481779 sshd[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:19.487742 systemd-logind[1449]: New session 15 of user core. Sep 13 00:10:19.496311 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:10:19.537219 systemd[1]: Started cri-containerd-7e309e8a31e6aa875d9c993c60a8abc8ff391e7800a42951a91c6801b1e3508a.scope - libcontainer container 7e309e8a31e6aa875d9c993c60a8abc8ff391e7800a42951a91c6801b1e3508a. Sep 13 00:10:19.742032 containerd[1474]: time="2025-09-13T00:10:19.741943832Z" level=info msg="StartContainer for \"7e309e8a31e6aa875d9c993c60a8abc8ff391e7800a42951a91c6801b1e3508a\" returns successfully" Sep 13 00:10:19.922593 kubelet[2553]: I0913 00:10:19.921918 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7854f6d79d-nbc6q" podStartSLOduration=27.588640357 podStartE2EDuration="48.921891926s" podCreationTimestamp="2025-09-13 00:09:31 +0000 UTC" firstStartedPulling="2025-09-13 00:09:58.031338947 +0000 UTC m=+49.043500358" lastFinishedPulling="2025-09-13 00:10:19.364590516 +0000 UTC m=+70.376751927" observedRunningTime="2025-09-13 00:10:19.915925932 +0000 UTC m=+70.928087363" watchObservedRunningTime="2025-09-13 00:10:19.921891926 +0000 UTC m=+70.934053337" Sep 13 00:10:20.006292 sshd[5813]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:20.025448 systemd[1]: Started sshd@15-10.0.0.89:22-10.0.0.1:44922.service - OpenSSH per-connection server daemon (10.0.0.1:44922). Sep 13 00:10:20.027916 systemd[1]: sshd@14-10.0.0.89:22-10.0.0.1:41566.service: Deactivated successfully. Sep 13 00:10:20.034768 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:10:20.040033 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:10:20.044344 systemd-logind[1449]: Removed session 15. Sep 13 00:10:20.079472 sshd[5894]: Accepted publickey for core from 10.0.0.1 port 44922 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:10:20.081965 sshd[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:20.089879 systemd-logind[1449]: New session 16 of user core. Sep 13 00:10:20.095758 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:10:20.229408 sshd[5894]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:20.234681 systemd[1]: sshd@15-10.0.0.89:22-10.0.0.1:44922.service: Deactivated successfully. Sep 13 00:10:20.237590 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:10:20.238318 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:10:20.239376 systemd-logind[1449]: Removed session 16. Sep 13 00:10:20.913851 systemd[1]: run-containerd-runc-k8s.io-7e309e8a31e6aa875d9c993c60a8abc8ff391e7800a42951a91c6801b1e3508a-runc.KcHU8I.mount: Deactivated successfully. 
Sep 13 00:10:22.085592 kubelet[2553]: E0913 00:10:22.084847 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:22.158370 containerd[1474]: time="2025-09-13T00:10:22.158264664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:22.159488 containerd[1474]: time="2025-09-13T00:10:22.159407514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:10:22.162920 containerd[1474]: time="2025-09-13T00:10:22.162850360Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:22.166483 containerd[1474]: time="2025-09-13T00:10:22.166402024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:22.167262 containerd[1474]: time="2025-09-13T00:10:22.167205957Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.801665247s" Sep 13 00:10:22.167330 containerd[1474]: time="2025-09-13T00:10:22.167269999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:10:22.169240 containerd[1474]: time="2025-09-13T00:10:22.169180733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:10:22.171770 containerd[1474]: time="2025-09-13T00:10:22.171720257Z" level=info msg="CreateContainer within sandbox \"27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:10:22.202955 containerd[1474]: time="2025-09-13T00:10:22.202882230Z" level=info msg="CreateContainer within sandbox \"27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"85e350596ba8efcf6e83777c9b1407069ef465247600135029fea445633a6ab0\"" Sep 13 00:10:22.204232 containerd[1474]: time="2025-09-13T00:10:22.203720308Z" level=info msg="StartContainer for \"85e350596ba8efcf6e83777c9b1407069ef465247600135029fea445633a6ab0\"" Sep 13 00:10:22.250397 systemd[1]: Started cri-containerd-85e350596ba8efcf6e83777c9b1407069ef465247600135029fea445633a6ab0.scope - libcontainer container 85e350596ba8efcf6e83777c9b1407069ef465247600135029fea445633a6ab0. Sep 13 00:10:22.298459 containerd[1474]: time="2025-09-13T00:10:22.298386278Z" level=info msg="StartContainer for \"85e350596ba8efcf6e83777c9b1407069ef465247600135029fea445633a6ab0\" returns successfully" Sep 13 00:10:23.562848 kubelet[2553]: I0913 00:10:23.562781 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:10:25.242457 systemd[1]: Started sshd@16-10.0.0.89:22-10.0.0.1:44934.service - OpenSSH per-connection server daemon (10.0.0.1:44934). 
Sep 13 00:10:25.435560 sshd[5978]: Accepted publickey for core from 10.0.0.1 port 44934 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:10:25.437913 sshd[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:25.443578 systemd-logind[1449]: New session 17 of user core. Sep 13 00:10:25.453209 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:10:25.858835 sshd[5978]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:25.865337 systemd[1]: sshd@16-10.0.0.89:22-10.0.0.1:44934.service: Deactivated successfully. Sep 13 00:10:25.868941 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:10:25.869827 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:10:25.871235 systemd-logind[1449]: Removed session 17. Sep 13 00:10:27.084451 kubelet[2553]: E0913 00:10:27.084403 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:28.085094 kubelet[2553]: E0913 00:10:28.085043 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:28.561573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3875515181.mount: Deactivated successfully. Sep 13 00:10:29.422120 containerd[1474]: time="2025-09-13T00:10:29.422040794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:29.428061 containerd[1474]: time="2025-09-13T00:10:29.427982708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:10:29.430252 containerd[1474]: time="2025-09-13T00:10:29.430156823Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:29.437159 containerd[1474]: time="2025-09-13T00:10:29.437078581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:29.438252 containerd[1474]: time="2025-09-13T00:10:29.438200766Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 7.268958975s" Sep 13 00:10:29.438311 containerd[1474]: time="2025-09-13T00:10:29.438255700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:10:29.439709 containerd[1474]: time="2025-09-13T00:10:29.439622490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:10:29.441440 containerd[1474]: time="2025-09-13T00:10:29.440911282Z" level=info msg="CreateContainer within sandbox \"f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef\" for container 
&ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:10:29.486447 containerd[1474]: time="2025-09-13T00:10:29.486362916Z" level=info msg="CreateContainer within sandbox \"f4a22cb5d10f79255c962562b77b8f68a04c2655d62632b7b29611c164c81fef\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f0a9ae0291fd915da9fcb81b9f00bbadd56d54ba9ec962abea7160a54e1cfadf\"" Sep 13 00:10:29.487229 containerd[1474]: time="2025-09-13T00:10:29.487175672Z" level=info msg="StartContainer for \"f0a9ae0291fd915da9fcb81b9f00bbadd56d54ba9ec962abea7160a54e1cfadf\"" Sep 13 00:10:29.537370 systemd[1]: Started cri-containerd-f0a9ae0291fd915da9fcb81b9f00bbadd56d54ba9ec962abea7160a54e1cfadf.scope - libcontainer container f0a9ae0291fd915da9fcb81b9f00bbadd56d54ba9ec962abea7160a54e1cfadf. Sep 13 00:10:29.592859 containerd[1474]: time="2025-09-13T00:10:29.592667220Z" level=info msg="StartContainer for \"f0a9ae0291fd915da9fcb81b9f00bbadd56d54ba9ec962abea7160a54e1cfadf\" returns successfully" Sep 13 00:10:29.923700 kubelet[2553]: I0913 00:10:29.923483 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6b6bbf4cfb-7ddpj" podStartSLOduration=2.389276289 podStartE2EDuration="33.923453652s" podCreationTimestamp="2025-09-13 00:09:56 +0000 UTC" firstStartedPulling="2025-09-13 00:09:57.905199079 +0000 UTC m=+48.917360490" lastFinishedPulling="2025-09-13 00:10:29.439376432 +0000 UTC m=+80.451537853" observedRunningTime="2025-09-13 00:10:29.923267528 +0000 UTC m=+80.935428939" watchObservedRunningTime="2025-09-13 00:10:29.923453652 +0000 UTC m=+80.935615073" Sep 13 00:10:30.874920 systemd[1]: Started sshd@17-10.0.0.89:22-10.0.0.1:44052.service - OpenSSH per-connection server daemon (10.0.0.1:44052). Sep 13 00:10:30.946460 sshd[6070]: Accepted publickey for core from 10.0.0.1 port 44052 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:10:30.948808 sshd[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:30.954196 systemd-logind[1449]: New session 18 of user core. Sep 13 00:10:30.965461 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:10:31.429941 sshd[6070]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:31.436436 systemd[1]: sshd@17-10.0.0.89:22-10.0.0.1:44052.service: Deactivated successfully. Sep 13 00:10:31.439273 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:10:31.440130 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:10:31.441253 systemd-logind[1449]: Removed session 18. 
Sep 13 00:10:31.626059 containerd[1474]: time="2025-09-13T00:10:31.625950042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:31.683029 containerd[1474]: time="2025-09-13T00:10:31.682783408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:10:31.684314 containerd[1474]: time="2025-09-13T00:10:31.684270394Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:31.734140 containerd[1474]: time="2025-09-13T00:10:31.734065992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:31.735061 containerd[1474]: time="2025-09-13T00:10:31.734988075Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.295316091s" Sep 13 00:10:31.735061 containerd[1474]: time="2025-09-13T00:10:31.735051675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:10:31.743119 containerd[1474]: time="2025-09-13T00:10:31.743061523Z" level=info msg="CreateContainer within sandbox \"27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:10:31.764350 containerd[1474]: time="2025-09-13T00:10:31.764295431Z" level=info msg="CreateContainer within sandbox \"27c5048220a90d5b6431525d855bb196ade75bb0cd9e6b4dac34a3cba76afb1f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8e0c518f026b933cd2971e5cf0d67767ccd7d6f351c36d4016898bcc524e14ce\"" Sep 13 00:10:31.764878 containerd[1474]: time="2025-09-13T00:10:31.764849003Z" level=info msg="StartContainer for \"8e0c518f026b933cd2971e5cf0d67767ccd7d6f351c36d4016898bcc524e14ce\"" Sep 13 00:10:31.831205 systemd[1]: Started cri-containerd-8e0c518f026b933cd2971e5cf0d67767ccd7d6f351c36d4016898bcc524e14ce.scope - libcontainer container 8e0c518f026b933cd2971e5cf0d67767ccd7d6f351c36d4016898bcc524e14ce. 
Sep 13 00:10:31.869942 containerd[1474]: time="2025-09-13T00:10:31.869882093Z" level=info msg="StartContainer for \"8e0c518f026b933cd2971e5cf0d67767ccd7d6f351c36d4016898bcc524e14ce\" returns successfully" Sep 13 00:10:31.929421 kubelet[2553]: I0913 00:10:31.929333 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-t8tr4" podStartSLOduration=27.261341037 podStartE2EDuration="1m0.92930416s" podCreationTimestamp="2025-09-13 00:09:31 +0000 UTC" firstStartedPulling="2025-09-13 00:09:58.067929301 +0000 UTC m=+49.080090712" lastFinishedPulling="2025-09-13 00:10:31.735892424 +0000 UTC m=+82.748053835" observedRunningTime="2025-09-13 00:10:31.928521282 +0000 UTC m=+82.940682713" watchObservedRunningTime="2025-09-13 00:10:31.92930416 +0000 UTC m=+82.941465571" Sep 13 00:10:32.408168 kubelet[2553]: I0913 00:10:32.408071 2553 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:10:32.411046 kubelet[2553]: I0913 00:10:32.411027 2553 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:10:36.443240 systemd[1]: Started sshd@18-10.0.0.89:22-10.0.0.1:44070.service - OpenSSH per-connection server daemon (10.0.0.1:44070). Sep 13 00:10:36.649404 sshd[6130]: Accepted publickey for core from 10.0.0.1 port 44070 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:10:36.651501 sshd[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:36.664861 systemd-logind[1449]: New session 19 of user core. Sep 13 00:10:36.669197 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:10:36.940754 sshd[6130]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:36.946172 systemd[1]: sshd@18-10.0.0.89:22-10.0.0.1:44070.service: Deactivated successfully. Sep 13 00:10:36.948559 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:10:36.949383 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:10:36.950556 systemd-logind[1449]: Removed session 19. Sep 13 00:10:41.953490 systemd[1]: Started sshd@19-10.0.0.89:22-10.0.0.1:41720.service - OpenSSH per-connection server daemon (10.0.0.1:41720). Sep 13 00:10:42.026402 sshd[6192]: Accepted publickey for core from 10.0.0.1 port 41720 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:10:42.028767 sshd[6192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:42.034032 systemd-logind[1449]: New session 20 of user core. Sep 13 00:10:42.041301 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:10:42.084399 kubelet[2553]: E0913 00:10:42.084317 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:42.254490 sshd[6192]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:42.269282 systemd[1]: sshd@19-10.0.0.89:22-10.0.0.1:41720.service: Deactivated successfully. Sep 13 00:10:42.272471 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:10:42.274448 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. 
Sep 13 00:10:42.290488 systemd[1]: Started sshd@20-10.0.0.89:22-10.0.0.1:41730.service - OpenSSH per-connection server daemon (10.0.0.1:41730). Sep 13 00:10:42.291738 systemd-logind[1449]: Removed session 20. Sep 13 00:10:42.327284 sshd[6208]: Accepted publickey for core from 10.0.0.1 port 41730 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:10:42.329169 sshd[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:42.334021 systemd-logind[1449]: New session 21 of user core. Sep 13 00:10:42.341176 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:10:43.256365 sshd[6208]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:43.270875 systemd[1]: sshd@20-10.0.0.89:22-10.0.0.1:41730.service: Deactivated successfully. Sep 13 00:10:43.273505 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:10:43.275432 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:10:43.282364 systemd[1]: Started sshd@21-10.0.0.89:22-10.0.0.1:41754.service - OpenSSH per-connection server daemon (10.0.0.1:41754). Sep 13 00:10:43.283407 systemd-logind[1449]: Removed session 21. Sep 13 00:10:43.323066 sshd[6220]: Accepted publickey for core from 10.0.0.1 port 41754 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:10:43.324852 sshd[6220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:43.329639 systemd-logind[1449]: New session 22 of user core. Sep 13 00:10:43.339208 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:10:45.202300 sshd[6220]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:45.212143 systemd[1]: sshd@21-10.0.0.89:22-10.0.0.1:41754.service: Deactivated successfully. Sep 13 00:10:45.215326 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:10:45.217356 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:10:45.226142 systemd[1]: Started sshd@22-10.0.0.89:22-10.0.0.1:41768.service - OpenSSH per-connection server daemon (10.0.0.1:41768). Sep 13 00:10:45.227202 systemd-logind[1449]: Removed session 22. Sep 13 00:10:45.268081 sshd[6263]: Accepted publickey for core from 10.0.0.1 port 41768 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:10:45.269961 sshd[6263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:45.275077 systemd-logind[1449]: New session 23 of user core. Sep 13 00:10:45.285316 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 00:10:46.306586 sshd[6263]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:46.318141 systemd[1]: sshd@22-10.0.0.89:22-10.0.0.1:41768.service: Deactivated successfully. Sep 13 00:10:46.321126 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:10:46.323726 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:10:46.332862 systemd[1]: Started sshd@23-10.0.0.89:22-10.0.0.1:41772.service - OpenSSH per-connection server daemon (10.0.0.1:41772). Sep 13 00:10:46.335184 systemd-logind[1449]: Removed session 23. Sep 13 00:10:46.370412 sshd[6279]: Accepted publickey for core from 10.0.0.1 port 41772 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:10:46.372300 sshd[6279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:46.376543 systemd-logind[1449]: New session 24 of user core. 
Sep 13 00:10:46.383171 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 00:10:46.543322 sshd[6279]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:46.548825 systemd[1]: sshd@23-10.0.0.89:22-10.0.0.1:41772.service: Deactivated successfully. Sep 13 00:10:46.551400 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:10:46.552194 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:10:46.553157 systemd-logind[1449]: Removed session 24. Sep 13 00:10:51.554135 systemd[1]: Started sshd@24-10.0.0.89:22-10.0.0.1:54000.service - OpenSSH per-connection server daemon (10.0.0.1:54000). Sep 13 00:10:51.593780 sshd[6295]: Accepted publickey for core from 10.0.0.1 port 54000 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:10:51.595987 sshd[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:51.600929 systemd-logind[1449]: New session 25 of user core. Sep 13 00:10:51.614311 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 13 00:10:51.729301 sshd[6295]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:51.734453 systemd[1]: sshd@24-10.0.0.89:22-10.0.0.1:54000.service: Deactivated successfully. Sep 13 00:10:51.737641 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:10:51.738612 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:10:51.739860 systemd-logind[1449]: Removed session 25. Sep 13 00:10:56.742807 systemd[1]: Started sshd@25-10.0.0.89:22-10.0.0.1:54022.service - OpenSSH per-connection server daemon (10.0.0.1:54022). Sep 13 00:10:56.805227 sshd[6314]: Accepted publickey for core from 10.0.0.1 port 54022 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:10:56.807854 sshd[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:56.817175 systemd-logind[1449]: New session 26 of user core. Sep 13 00:10:56.823193 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 13 00:10:57.048061 sshd[6314]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:57.053427 systemd[1]: sshd@25-10.0.0.89:22-10.0.0.1:54022.service: Deactivated successfully. Sep 13 00:10:57.056456 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:10:57.058617 systemd-logind[1449]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:10:57.060585 systemd-logind[1449]: Removed session 26. Sep 13 00:11:02.069383 systemd[1]: Started sshd@26-10.0.0.89:22-10.0.0.1:57024.service - OpenSSH per-connection server daemon (10.0.0.1:57024). Sep 13 00:11:02.109500 sshd[6352]: Accepted publickey for core from 10.0.0.1 port 57024 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:11:02.111820 sshd[6352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:02.116670 systemd-logind[1449]: New session 27 of user core. Sep 13 00:11:02.123201 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 13 00:11:02.243198 sshd[6352]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:02.248238 systemd[1]: sshd@26-10.0.0.89:22-10.0.0.1:57024.service: Deactivated successfully. Sep 13 00:11:02.250727 systemd[1]: session-27.scope: Deactivated successfully. Sep 13 00:11:02.251484 systemd-logind[1449]: Session 27 logged out. Waiting for processes to exit. Sep 13 00:11:02.252473 systemd-logind[1449]: Removed session 27. 
Sep 13 00:11:07.254454 systemd[1]: Started sshd@27-10.0.0.89:22-10.0.0.1:57054.service - OpenSSH per-connection server daemon (10.0.0.1:57054). Sep 13 00:11:07.302059 sshd[6366]: Accepted publickey for core from 10.0.0.1 port 57054 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:11:07.304511 sshd[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:07.309696 systemd-logind[1449]: New session 28 of user core. Sep 13 00:11:07.316161 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 13 00:11:07.467495 sshd[6366]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:07.471467 systemd[1]: sshd@27-10.0.0.89:22-10.0.0.1:57054.service: Deactivated successfully. Sep 13 00:11:07.473911 systemd[1]: session-28.scope: Deactivated successfully. Sep 13 00:11:07.474523 systemd-logind[1449]: Session 28 logged out. Waiting for processes to exit. Sep 13 00:11:07.475589 systemd-logind[1449]: Removed session 28. Sep 13 00:11:09.084769 kubelet[2553]: E0913 00:11:09.084732 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:12.491548 systemd[1]: Started sshd@28-10.0.0.89:22-10.0.0.1:59438.service - OpenSSH per-connection server daemon (10.0.0.1:59438). Sep 13 00:11:12.541615 sshd[6424]: Accepted publickey for core from 10.0.0.1 port 59438 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:11:12.544127 sshd[6424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:12.549729 systemd-logind[1449]: New session 29 of user core. Sep 13 00:11:12.557171 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 13 00:11:12.758588 sshd[6424]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:12.763814 systemd[1]: sshd@28-10.0.0.89:22-10.0.0.1:59438.service: Deactivated successfully. Sep 13 00:11:12.766892 systemd[1]: session-29.scope: Deactivated successfully. Sep 13 00:11:12.768662 systemd-logind[1449]: Session 29 logged out. Waiting for processes to exit. Sep 13 00:11:12.770259 systemd-logind[1449]: Removed session 29.
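Annotation: the recurring dns.go:153 warning throughout this log is kubelet enforcing the classic resolv.conf cap of three nameservers (glibc's MAXNS). The node's resolver config lists more than three servers, so kubelet applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8) and warns that the rest were omitted; it is a truncation notice, not an error. A toy Go version of that truncation, with an invented fourth server (8.8.4.4) standing in for whatever the node's resolv.conf actually dropped:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS; kubelet applies the same cap

    // parseNameservers keeps the first three nameserver entries and
    // reports the rest as dropped, like kubelet's dns.go warning.
    func parseNameservers(resolvConf string) (applied, dropped []string) {
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                if len(applied) < maxNameservers {
                    applied = append(applied, fields[1])
                } else {
                    dropped = append(dropped, fields[1])
                }
            }
        }
        return applied, dropped
    }

    func main() {
        // Four servers on the node; only the first three survive.
        conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
        applied, dropped := parseNameservers(conf)
        fmt.Println("applied:", strings.Join(applied, " ")) // 1.1.1.1 1.0.0.1 8.8.8.8
        fmt.Println("dropped:", strings.Join(dropped, " ")) // 8.8.4.4
    }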