Nov 1 00:12:58.996159 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 00:12:58.996181 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:12:58.996193 kernel: BIOS-provided physical RAM map:
Nov 1 00:12:58.996200 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 1 00:12:58.996206 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 1 00:12:58.996212 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 1 00:12:58.996219 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 1 00:12:58.996226 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 1 00:12:58.996232 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 1 00:12:58.996241 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 1 00:12:58.996247 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 00:12:58.996254 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 1 00:12:58.996264 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 1 00:12:58.996270 kernel: NX (Execute Disable) protection: active
Nov 1 00:12:58.996278 kernel: APIC: Static calls initialized
Nov 1 00:12:58.996292 kernel: SMBIOS 2.8 present.
Nov 1 00:12:58.996299 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 1 00:12:58.996305 kernel: Hypervisor detected: KVM
Nov 1 00:12:58.996312 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:12:58.996319 kernel: kvm-clock: using sched offset of 4120659057 cycles
Nov 1 00:12:58.996326 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:12:58.996333 kernel: tsc: Detected 2794.748 MHz processor
Nov 1 00:12:58.996341 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:12:58.996351 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:12:58.996370 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 1 00:12:58.996380 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 1 00:12:58.996389 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:12:58.996398 kernel: Using GB pages for direct mapping
Nov 1 00:12:58.996407 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:12:58.996416 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 1 00:12:58.996425 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:12:58.996432 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:12:58.996440 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:12:58.996450 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 1 00:12:58.996457 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:12:58.996464 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:12:58.996471 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:12:58.996478 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:12:58.996485 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 1 00:12:58.996492 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 1 00:12:58.996503 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 1 00:12:58.996513 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 1 00:12:58.996520 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 1 00:12:58.996527 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 1 00:12:58.996534 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 1 00:12:58.996541 kernel: No NUMA configuration found
Nov 1 00:12:58.996548 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 1 00:12:58.996558 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Nov 1 00:12:58.996565 kernel: Zone ranges:
Nov 1 00:12:58.996573 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:12:58.996580 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 1 00:12:58.996587 kernel: Normal empty
Nov 1 00:12:58.996594 kernel: Movable zone start for each node
Nov 1 00:12:58.996601 kernel: Early memory node ranges
Nov 1 00:12:58.996609 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 1 00:12:58.996616 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 1 00:12:58.996623 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 1 00:12:58.996633 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:12:58.996644 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 1 00:12:58.996652 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 1 00:12:58.996659 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 00:12:58.996667 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:12:58.996677 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:12:58.996702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 00:12:58.996713 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:12:58.996721 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:12:58.996732 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:12:58.996739 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:12:58.996747 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:12:58.996754 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 00:12:58.996761 kernel: TSC deadline timer available
Nov 1 00:12:58.996768 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 1 00:12:58.996776 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 1 00:12:58.996783 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 1 00:12:58.996793 kernel: kvm-guest: setup PV sched yield
Nov 1 00:12:58.996803 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 1 00:12:58.996810 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:12:58.996818 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:12:58.996825 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 1 00:12:58.996832 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288
Nov 1 00:12:58.996840 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152
Nov 1 00:12:58.996847 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 1 00:12:58.996854 kernel: kvm-guest: PV spinlocks enabled
Nov 1 00:12:58.996861 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 00:12:58.996873 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:12:58.996880 kernel: random: crng init done
Nov 1 00:12:58.996888 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:12:58.996895 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:12:58.996902 kernel: Fallback order for Node 0: 0
Nov 1 00:12:58.996910 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Nov 1 00:12:58.996917 kernel: Policy zone: DMA32
Nov 1 00:12:58.996924 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:12:58.996934 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 136900K reserved, 0K cma-reserved)
Nov 1 00:12:58.996942 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 1 00:12:58.996949 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 00:12:58.996956 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 00:12:58.996963 kernel: Dynamic Preempt: voluntary
Nov 1 00:12:58.996971 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 00:12:58.996979 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:12:58.996986 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 1 00:12:58.996994 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 00:12:58.997004 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:12:58.997011 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:12:58.997019 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:12:58.997026 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 1 00:12:58.997035 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 1 00:12:58.997043 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 00:12:58.997050 kernel: Console: colour VGA+ 80x25
Nov 1 00:12:58.997057 kernel: printk: console [ttyS0] enabled
Nov 1 00:12:58.997064 kernel: ACPI: Core revision 20230628
Nov 1 00:12:58.997074 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 00:12:58.997082 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:12:58.997089 kernel: x2apic enabled
Nov 1 00:12:58.997096 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 00:12:58.997104 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 1 00:12:58.997117 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 1 00:12:58.997130 kernel: kvm-guest: setup PV IPIs
Nov 1 00:12:58.997140 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 00:12:58.997175 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 1 00:12:58.997185 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 1 00:12:58.997195 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 00:12:58.997205 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 1 00:12:58.997218 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 1 00:12:58.997228 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:12:58.997238 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:12:58.997249 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:12:58.997257 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 1 00:12:58.997268 kernel: active return thunk: retbleed_return_thunk
Nov 1 00:12:58.997275 kernel: RETBleed: Mitigation: untrained return thunk
Nov 1 00:12:58.997287 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:12:58.997295 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 00:12:58.997303 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 1 00:12:58.997311 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 1 00:12:58.997321 kernel: active return thunk: srso_return_thunk
Nov 1 00:12:58.997329 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 1 00:12:58.997339 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:12:58.997347 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:12:58.997355 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:12:58.997362 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:12:58.997370 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 1 00:12:58.997377 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:12:58.997385 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:12:58.997392 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 00:12:58.997400 kernel: landlock: Up and running.
Nov 1 00:12:58.997410 kernel: SELinux: Initializing.
Nov 1 00:12:58.997418 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:12:58.997425 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:12:58.997433 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 1 00:12:58.997441 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 00:12:58.997448 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 00:12:58.997456 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 00:12:58.997464 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 1 00:12:58.997474 kernel: ... version: 0
Nov 1 00:12:58.997484 kernel: ... bit width: 48
Nov 1 00:12:58.997492 kernel: ... generic registers: 6
Nov 1 00:12:58.997499 kernel: ... value mask: 0000ffffffffffff
Nov 1 00:12:58.997507 kernel: ... max period: 00007fffffffffff
Nov 1 00:12:58.997514 kernel: ... fixed-purpose events: 0
Nov 1 00:12:58.997522 kernel: ... event mask: 000000000000003f
Nov 1 00:12:58.997530 kernel: signal: max sigframe size: 1776
Nov 1 00:12:58.997537 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:12:58.997545 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 00:12:58.997555 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:12:58.997563 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 00:12:58.997571 kernel: .... node #0, CPUs: #1 #2 #3
Nov 1 00:12:58.997578 kernel: smp: Brought up 1 node, 4 CPUs
Nov 1 00:12:58.997586 kernel: smpboot: Max logical packages: 1
Nov 1 00:12:58.997593 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 1 00:12:58.997601 kernel: devtmpfs: initialized
Nov 1 00:12:58.997608 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:12:58.997616 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:12:58.997626 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 1 00:12:58.997634 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:12:58.997641 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:12:58.997649 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:12:58.997657 kernel: audit: type=2000 audit(1761955978.141:1): state=initialized audit_enabled=0 res=1
Nov 1 00:12:58.997664 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:12:58.997677 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:12:58.997717 kernel: cpuidle: using governor menu
Nov 1 00:12:58.997728 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:12:58.997743 kernel: dca service started, version 1.12.1
Nov 1 00:12:58.997751 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 1 00:12:58.997759 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 1 00:12:58.997767 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:12:58.997774 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:12:58.997782 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:12:58.997790 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 00:12:58.997797 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:12:58.997805 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 00:12:58.997815 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:12:58.997822 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:12:58.997830 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:12:58.997837 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:12:58.997845 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 1 00:12:58.997852 kernel: ACPI: Interpreter enabled
Nov 1 00:12:58.997860 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 1 00:12:58.997867 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:12:58.997875 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:12:58.997885 kernel: PCI: Using E820 reservations for host bridge windows
Nov 1 00:12:58.997892 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 00:12:58.997900 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:12:58.998137 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:12:58.998324 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 1 00:12:58.998478 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 1 00:12:58.998489 kernel: PCI host bridge to bus 0000:00
Nov 1 00:12:58.998642 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:12:58.998809 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:12:58.998940 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:12:58.999057 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 1 00:12:58.999185 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 1 00:12:58.999302 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 1 00:12:58.999420 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:12:58.999586 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 1 00:12:58.999773 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 1 00:12:58.999907 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Nov 1 00:12:59.000034 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Nov 1 00:12:59.000170 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 1 00:12:59.000299 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 00:12:59.000448 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 1 00:12:59.000584 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 1 00:12:59.000738 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 1 00:12:59.000871 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 1 00:12:59.001018 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 1 00:12:59.001157 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Nov 1 00:12:59.001318 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 1 00:12:59.001487 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 1 00:12:59.001713 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:12:59.001859 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Nov 1 00:12:59.001988 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 1 00:12:59.002114 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 1 00:12:59.002254 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 1 00:12:59.002401 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 1 00:12:59.002537 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 00:12:59.002719 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 1 00:12:59.002913 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Nov 1 00:12:59.003110 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Nov 1 00:12:59.003276 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 1 00:12:59.003408 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 1 00:12:59.003419 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:12:59.003433 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:12:59.003441 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:12:59.003449 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:12:59.003457 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 00:12:59.003464 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 00:12:59.003472 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 00:12:59.003480 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 00:12:59.003487 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 00:12:59.003495 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 00:12:59.003506 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 00:12:59.003514 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 00:12:59.003522 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 00:12:59.003530 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 00:12:59.003538 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 00:12:59.003545 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 00:12:59.003553 kernel: iommu: Default domain type: Translated
Nov 1 00:12:59.003561 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:12:59.003568 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:12:59.003579 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:12:59.003587 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 1 00:12:59.003594 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 1 00:12:59.003761 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 00:12:59.003908 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 00:12:59.004052 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 00:12:59.004062 kernel: vgaarb: loaded
Nov 1 00:12:59.004070 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 00:12:59.004083 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 00:12:59.004091 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:12:59.004099 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:12:59.004106 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:12:59.004114 kernel: pnp: PnP ACPI init
Nov 1 00:12:59.004296 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 1 00:12:59.004308 kernel: pnp: PnP ACPI: found 6 devices
Nov 1 00:12:59.004316 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:12:59.004324 kernel: NET: Registered PF_INET protocol family
Nov 1 00:12:59.004336 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:12:59.004344 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 1 00:12:59.004352 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:12:59.004359 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:12:59.004367 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 1 00:12:59.004376 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 1 00:12:59.004386 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:12:59.004397 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:12:59.004412 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:12:59.004423 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:12:59.004559 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:12:59.004823 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:12:59.004974 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:12:59.005116 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 1 00:12:59.005269 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 1 00:12:59.005387 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 1 00:12:59.005398 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:12:59.005412 kernel: Initialise system trusted keyrings
Nov 1 00:12:59.005420 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 1 00:12:59.005427 kernel: Key type asymmetric registered
Nov 1 00:12:59.005435 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:12:59.005443 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 1 00:12:59.005451 kernel: io scheduler mq-deadline registered
Nov 1 00:12:59.005459 kernel: io scheduler kyber registered
Nov 1 00:12:59.005466 kernel: io scheduler bfq registered
Nov 1 00:12:59.005474 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:12:59.005485 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 1 00:12:59.005493 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 1 00:12:59.005501 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 1 00:12:59.005509 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:12:59.005517 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:12:59.005525 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:12:59.005533 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:12:59.005540 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:12:59.005701 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 1 00:12:59.005722 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 00:12:59.005849 kernel: rtc_cmos 00:04: registered as rtc0
Nov 1 00:12:59.005969 kernel: rtc_cmos 00:04: setting system clock to 2025-11-01T00:12:58 UTC (1761955978)
Nov 1 00:12:59.006087 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 1 00:12:59.006097 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 1 00:12:59.006105 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:12:59.006113 kernel: Segment Routing with IPv6
Nov 1 00:12:59.006120 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:12:59.006132 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:12:59.006140 kernel: Key type dns_resolver registered
Nov 1 00:12:59.006157 kernel: IPI shorthand broadcast: enabled
Nov 1 00:12:59.006165 kernel: sched_clock: Marking stable (1250003491, 206147533)->(1709553704, -253402680)
Nov 1 00:12:59.006173 kernel: registered taskstats version 1
Nov 1 00:12:59.006181 kernel: Loading compiled-in X.509 certificates
Nov 1 00:12:59.006189 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4'
Nov 1 00:12:59.006197 kernel: Key type .fscrypt registered
Nov 1 00:12:59.006205 kernel: Key type fscrypt-provisioning registered
Nov 1 00:12:59.006216 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:12:59.006224 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:12:59.006231 kernel: ima: No architecture policies found
Nov 1 00:12:59.006239 kernel: clk: Disabling unused clocks
Nov 1 00:12:59.006246 kernel: Freeing unused kernel image (initmem) memory: 42884K
Nov 1 00:12:59.006254 kernel: Write protecting the kernel read-only data: 36864k
Nov 1 00:12:59.006262 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 1 00:12:59.006270 kernel: Run /init as init process
Nov 1 00:12:59.006277 kernel: with arguments:
Nov 1 00:12:59.006288 kernel: /init
Nov 1 00:12:59.006295 kernel: with environment:
Nov 1 00:12:59.006303 kernel: HOME=/
Nov 1 00:12:59.006310 kernel: TERM=linux
Nov 1 00:12:59.006320 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 00:12:59.006330 systemd[1]: Detected virtualization kvm.
Nov 1 00:12:59.006339 systemd[1]: Detected architecture x86-64.
Nov 1 00:12:59.006347 systemd[1]: Running in initrd.
Nov 1 00:12:59.006358 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:12:59.006366 systemd[1]: Hostname set to .
Nov 1 00:12:59.006375 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:12:59.006383 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:12:59.006391 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:12:59.006399 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:12:59.006408 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 1 00:12:59.006417 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 00:12:59.006428 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 1 00:12:59.006450 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 1 00:12:59.006465 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 1 00:12:59.006474 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 1 00:12:59.006485 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:12:59.006494 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:12:59.006502 systemd[1]: Reached target paths.target - Path Units.
Nov 1 00:12:59.006511 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 00:12:59.006519 systemd[1]: Reached target swap.target - Swaps.
Nov 1 00:12:59.006528 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 00:12:59.006536 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:12:59.006545 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:12:59.006554 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 1 00:12:59.006565 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 1 00:12:59.006573 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:12:59.006582 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:12:59.006590 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:12:59.006599 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 00:12:59.006607 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 1 00:12:59.006616 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:12:59.006624 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 1 00:12:59.006635 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:12:59.006644 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:12:59.006652 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:12:59.006660 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:12:59.006671 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 1 00:12:59.006682 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:12:59.006709 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:12:59.006749 systemd-journald[193]: Collecting audit messages is disabled.
Nov 1 00:12:59.006770 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 00:12:59.006782 systemd-journald[193]: Journal started
Nov 1 00:12:59.006800 systemd-journald[193]: Runtime Journal (/run/log/journal/8540e67f97a1488b9051be7627ac6fa5) is 6.0M, max 48.4M, 42.3M free.
Nov 1 00:12:59.007724 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:12:59.009087 systemd-modules-load[194]: Inserted module 'overlay'
Nov 1 00:12:59.078192 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:12:59.078217 kernel: Bridge firewalling registered
Nov 1 00:12:59.038216 systemd-modules-load[194]: Inserted module 'br_netfilter'
Nov 1 00:12:59.084118 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:12:59.087966 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:12:59.092055 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 00:12:59.105944 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:12:59.106991 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:12:59.108426 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:12:59.112883 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 00:12:59.130683 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:12:59.131747 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:12:59.140786 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:12:59.155950 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 00:12:59.156357 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:12:59.162830 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 1 00:12:59.185658 dracut-cmdline[232]: dracut-dracut-053
Nov 1 00:12:59.190853 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:12:59.200227 systemd-resolved[228]: Positive Trust Anchors:
Nov 1 00:12:59.200246 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:12:59.200277 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 00:12:59.203134 systemd-resolved[228]: Defaulting to hostname 'linux'.
Nov 1 00:12:59.204406 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 00:12:59.206426 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:12:59.296738 kernel: SCSI subsystem initialized
Nov 1 00:12:59.307717 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:12:59.318722 kernel: iscsi: registered transport (tcp)
Nov 1 00:12:59.343034 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:12:59.343065 kernel: QLogic iSCSI HBA Driver
Nov 1 00:12:59.400205 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:12:59.412928 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 1 00:12:59.440101 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:12:59.440204 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:12:59.441823 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 1 00:12:59.484756 kernel: raid6: avx2x4 gen() 29354 MB/s
Nov 1 00:12:59.501736 kernel: raid6: avx2x2 gen() 29689 MB/s
Nov 1 00:12:59.519559 kernel: raid6: avx2x1 gen() 25186 MB/s
Nov 1 00:12:59.519671 kernel: raid6: using algorithm avx2x2 gen() 29689 MB/s
Nov 1 00:12:59.537516 kernel: raid6: .... xor() 18764 MB/s, rmw enabled
Nov 1 00:12:59.537583 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 00:12:59.565728 kernel: xor: automatically using best checksumming function avx
Nov 1 00:12:59.753736 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 1 00:12:59.768122 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:12:59.786951 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:12:59.798897 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Nov 1 00:12:59.803801 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:12:59.827891 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 1 00:12:59.842570 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation
Nov 1 00:12:59.876961 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:12:59.890863 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:12:59.964598 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:12:59.979885 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 1 00:12:59.997827 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:13:00.004766 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:13:00.009230 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:13:00.014786 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 00:13:00.031000 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 1 00:13:00.037129 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 1 00:13:00.037358 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:13:00.037372 kernel: libata version 3.00 loaded.
Nov 1 00:13:00.046215 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 1 00:13:00.046502 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 00:13:00.051835 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 00:13:00.051864 kernel: GPT:9289727 != 19775487
Nov 1 00:13:00.051875 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 00:13:00.051886 kernel: GPT:9289727 != 19775487
Nov 1 00:13:00.051895 kernel: AES CTR mode by8 optimization enabled
Nov 1 00:13:00.052718 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:13:00.053229 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:13:00.062918 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:13:00.053474 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:13:00.088207 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:13:00.093307 kernel: ahci 0000:00:1f.2: version 3.0
Nov 1 00:13:00.093515 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 1 00:13:00.097162 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:13:00.104580 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 1 00:13:00.104839 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 1 00:13:00.105003 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (469)
Nov 1 00:13:00.099419 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:13:00.110569 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:13:00.120794 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (460)
Nov 1 00:13:00.121908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:13:00.129241 kernel: scsi host0: ahci
Nov 1 00:13:00.125973 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:13:00.133952 kernel: scsi host1: ahci
Nov 1 00:13:00.136008 kernel: scsi host2: ahci
Nov 1 00:13:00.136252 kernel: scsi host3: ahci
Nov 1 00:13:00.139706 kernel: scsi host4: ahci
Nov 1 00:13:00.139891 kernel: scsi host5: ahci
Nov 1 00:13:00.141515 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Nov 1 00:13:00.143990 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Nov 1 00:13:00.144041 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Nov 1 00:13:00.149718 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Nov 1 00:13:00.149751 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Nov 1 00:13:00.149771 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Nov 1 00:13:00.157446 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 1 00:13:00.232444 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:13:00.246222 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 1 00:13:00.256245 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 1 00:13:00.264236 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 1 00:13:00.318934 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 1 00:13:00.337926 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 1 00:13:00.343245 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:13:00.369976 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:13:00.460740 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 1 00:13:00.460824 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 1 00:13:00.461755 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 1 00:13:00.463720 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 1 00:13:00.464725 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 1 00:13:00.465718 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 1 00:13:00.469036 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 1 00:13:00.469061 kernel: ata3.00: applying bridge limits
Nov 1 00:13:00.469153 kernel: ata3.00: configured for UDMA/100
Nov 1 00:13:00.470720 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 1 00:13:00.519385 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 1 00:13:00.519810 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 1 00:13:00.533738 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 1 00:13:00.624488 disk-uuid[558]: Primary Header is updated.
Nov 1 00:13:00.624488 disk-uuid[558]: Secondary Entries is updated.
Nov 1 00:13:00.624488 disk-uuid[558]: Secondary Header is updated.
Nov 1 00:13:00.632719 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:13:00.638771 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:13:01.641746 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:13:01.642791 disk-uuid[579]: The operation has completed successfully.
Nov 1 00:13:01.696595 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 00:13:01.696826 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 1 00:13:01.714896 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 1 00:13:01.722751 sh[592]: Success
Nov 1 00:13:01.738865 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 1 00:13:01.777336 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 1 00:13:01.787569 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 1 00:13:01.790743 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 1 00:13:01.806259 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b
Nov 1 00:13:01.806324 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:13:01.806349 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 1 00:13:01.807974 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 1 00:13:01.810277 kernel: BTRFS info (device dm-0): using free space tree
Nov 1 00:13:01.814868 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 1 00:13:01.815581 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 1 00:13:01.827865 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 1 00:13:01.830264 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 1 00:13:01.846743 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:13:01.846796 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:13:01.846808 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:13:01.850721 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 00:13:01.862092 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 1 00:13:01.864979 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:13:01.935012 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 1 00:13:01.944998 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 1 00:13:01.968292 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:13:01.983092 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 00:13:02.014367 systemd-networkd[774]: lo: Link UP
Nov 1 00:13:02.014382 systemd-networkd[774]: lo: Gained carrier
Nov 1 00:13:02.017186 systemd-networkd[774]: Enumeration completed
Nov 1 00:13:02.017311 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 00:13:02.018132 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:13:02.018138 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:13:02.019466 systemd-networkd[774]: eth0: Link UP
Nov 1 00:13:02.019471 systemd-networkd[774]: eth0: Gained carrier
Nov 1 00:13:02.033049 ignition[756]: Ignition 2.19.0
Nov 1 00:13:02.019479 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:13:02.033058 ignition[756]: Stage: fetch-offline
Nov 1 00:13:02.021647 systemd[1]: Reached target network.target - Network.
Nov 1 00:13:02.033112 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:13:02.043816 systemd-networkd[774]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 1 00:13:02.033127 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:13:02.033251 ignition[756]: parsed url from cmdline: ""
Nov 1 00:13:02.033256 ignition[756]: no config URL provided
Nov 1 00:13:02.033262 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:13:02.033273 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:13:02.033304 ignition[756]: op(1): [started] loading QEMU firmware config module
Nov 1 00:13:02.033310 ignition[756]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 1 00:13:02.053988 ignition[756]: op(1): [finished] loading QEMU firmware config module
Nov 1 00:13:02.140559 ignition[756]: parsing config with SHA512: 7be6f18b94aa9ad05264bf741c8f6d7dffbf74de22192f8ff8b89b8a415e7eac4e44dfae7a9422890d4b9fc3f6a52856e3d11242333ae0b035bc3f03632d122f
Nov 1 00:13:02.150359 unknown[756]: fetched base config from "system"
Nov 1 00:13:02.150386 unknown[756]: fetched user config from "qemu"
Nov 1 00:13:02.151855 ignition[756]: fetch-offline: fetch-offline passed
Nov 1 00:13:02.152039 ignition[756]: Ignition finished successfully
Nov 1 00:13:02.159505 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:13:02.159915 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 1 00:13:02.183885 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 00:13:02.217627 ignition[785]: Ignition 2.19.0
Nov 1 00:13:02.217640 ignition[785]: Stage: kargs
Nov 1 00:13:02.217922 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:13:02.217937 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:13:02.219314 ignition[785]: kargs: kargs passed
Nov 1 00:13:02.219381 ignition[785]: Ignition finished successfully
Nov 1 00:13:02.229372 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 00:13:02.245917 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 00:13:02.264371 ignition[793]: Ignition 2.19.0
Nov 1 00:13:02.264384 ignition[793]: Stage: disks
Nov 1 00:13:02.264573 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:13:02.264586 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:13:02.268460 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 1 00:13:02.265618 ignition[793]: disks: disks passed
Nov 1 00:13:02.269330 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 1 00:13:02.265666 ignition[793]: Ignition finished successfully
Nov 1 00:13:02.270061 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 00:13:02.278790 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:13:02.280328 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 00:13:02.287951 systemd[1]: Reached target basic.target - Basic System.
Nov 1 00:13:02.301967 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 00:13:02.323429 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 1 00:13:02.357252 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 00:13:02.373905 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 00:13:02.489730 kernel: EXT4-fs (vda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none.
Nov 1 00:13:02.491401 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 00:13:02.494123 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:13:02.512975 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:13:02.516032 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 00:13:02.524496 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811)
Nov 1 00:13:02.518995 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 1 00:13:02.519040 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:13:02.538493 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:13:02.538515 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:13:02.538527 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:13:02.538538 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 00:13:02.519078 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:13:02.531219 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 00:13:02.539651 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:13:02.544139 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 00:13:02.591170 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:13:02.598616 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:13:02.605376 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:13:02.609884 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:13:02.722214 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 00:13:02.760877 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 00:13:02.766476 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 00:13:02.772735 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:13:02.797228 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 1 00:13:02.804846 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 00:13:02.985101 ignition[928]: INFO : Ignition 2.19.0
Nov 1 00:13:02.985101 ignition[928]: INFO : Stage: mount
Nov 1 00:13:02.994377 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:13:02.994377 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:13:02.994377 ignition[928]: INFO : mount: mount passed
Nov 1 00:13:02.994377 ignition[928]: INFO : Ignition finished successfully
Nov 1 00:13:03.004880 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 1 00:13:03.020894 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 1 00:13:03.027961 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:13:03.048907 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937)
Nov 1 00:13:03.048952 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:13:03.048972 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:13:03.050383 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:13:03.054726 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 00:13:03.056193 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:13:03.059821 systemd-networkd[774]: eth0: Gained IPv6LL
Nov 1 00:13:03.085629 ignition[954]: INFO : Ignition 2.19.0
Nov 1 00:13:03.085629 ignition[954]: INFO : Stage: files
Nov 1 00:13:03.106360 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:13:03.106360 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:13:03.106360 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:13:03.106360 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:13:03.106360 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:13:03.117075 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:13:03.117075 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:13:03.121798 unknown[954]: wrote ssh authorized keys file for user: core
Nov 1 00:13:03.123620 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:13:03.126597 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 00:13:03.129847 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 1 00:13:03.185112 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 00:13:03.353023 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 00:13:03.353023 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:13:03.359016 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:13:03.361819 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:13:03.365208 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:13:03.368213 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:13:03.371106 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:13:03.373904 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:13:03.376890 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:13:03.380284 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:13:03.383270 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:13:03.386138 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:13:03.390309 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:13:03.394865 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:13:03.398552 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 1 00:13:03.844598 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 1 00:13:04.587429 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:13:04.587429 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 1 00:13:04.594938 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:13:04.594938 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:13:04.594938 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 1 00:13:04.594938 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 1 00:13:04.594938 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 1 00:13:04.594938 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 1 00:13:04.594938 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 1 00:13:04.594938 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 1 00:13:04.665885 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 1 00:13:04.675832 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 1 00:13:04.679163 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 1 00:13:04.679163 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 00:13:04.679163 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 00:13:04.679163 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:13:04.679163 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:13:04.679163 ignition[954]: INFO : files: files passed
Nov 1 00:13:04.679163 ignition[954]: INFO : Ignition finished successfully
Nov 1 00:13:04.698480 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 1 00:13:04.716130 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 1 00:13:04.720398 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 1 00:13:04.723119 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 00:13:04.723262 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 1 00:13:04.737380 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 1 00:13:04.740676 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:13:04.740676 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:13:04.747215 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:13:04.752115 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 00:13:04.752521 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 1 00:13:04.768048 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 1 00:13:04.813639 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 00:13:04.813826 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 1 00:13:04.818299 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 1 00:13:04.822498 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 1 00:13:04.822623 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 1 00:13:04.833174 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 1 00:13:04.858896 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 00:13:04.864920 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 1 00:13:04.881637 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:13:04.881824 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:13:04.950077 systemd[1]: Stopped target timers.target - Timer Units.
Nov 1 00:13:04.955602 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 00:13:04.955826 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 00:13:04.963900 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 1 00:13:04.964107 systemd[1]: Stopped target basic.target - Basic System.
Nov 1 00:13:05.021587 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 1 00:13:05.023397 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:13:05.027646 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 1 00:13:05.034419 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 1 00:13:05.037156 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:13:05.040107 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 1 00:13:05.046224 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 1 00:13:05.051185 systemd[1]: Stopped target swap.target - Swaps.
Nov 1 00:13:05.052930 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 00:13:05.053113 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:13:05.060602 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:13:05.060820 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:13:05.064367 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 1 00:13:05.068331 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:13:05.070068 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 00:13:05.070220 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:13:05.079461 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 00:13:05.079628 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:13:05.083530 systemd[1]: Stopped target paths.target - Path Units.
Nov 1 00:13:05.109394 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 00:13:05.114759 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:13:05.115027 systemd[1]: Stopped target slices.target - Slice Units.
Nov 1 00:13:05.121045 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 1 00:13:05.122532 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 00:13:05.122740 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:13:05.127236 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 00:13:05.127427 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:13:05.128789 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 00:13:05.129015 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 00:13:05.129751 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 00:13:05.129957 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 1 00:13:05.146882 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 1 00:13:05.146990 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 00:13:05.147213 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:13:05.151767 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 1 00:13:05.155555 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 00:13:05.155811 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:13:05.157859 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 00:13:05.158071 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:13:05.177032 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 00:13:05.177191 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 1 00:13:05.208079 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 00:13:05.210463 ignition[1009]: INFO : Ignition 2.19.0
Nov 1 00:13:05.210463 ignition[1009]: INFO : Stage: umount
Nov 1 00:13:05.213589 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:13:05.213589 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:13:05.219087 ignition[1009]: INFO : umount: umount passed
Nov 1 00:13:05.220678 ignition[1009]: INFO : Ignition finished successfully
Nov 1 00:13:05.225170 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 00:13:05.225307 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 1 00:13:05.227517 systemd[1]: Stopped target network.target - Network.
Nov 1 00:13:05.232870 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 00:13:05.232932 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 1 00:13:05.243443 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 00:13:05.243497 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 1 00:13:05.245266 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 00:13:05.245317 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 1 00:13:05.246029 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 1 00:13:05.246085 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 1 00:13:05.254794 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 1 00:13:05.258639 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 1 00:13:05.272261 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 00:13:05.272426 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 1 00:13:05.303954 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 1 00:13:05.304030 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:13:05.313746 systemd-networkd[774]: eth0: DHCPv6 lease lost
Nov 1 00:13:05.316381 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:13:05.318091 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 1 00:13:05.322074 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 00:13:05.322127 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:13:05.337786 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 1 00:13:05.341026 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 00:13:05.341104 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:13:05.343345 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:13:05.345126 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:13:05.349310 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 00:13:05.349365 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:13:05.357573 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:13:05.369804 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 00:13:05.370020 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 1 00:13:05.379835 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 00:13:05.380075 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:13:05.398003 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 00:13:05.398082 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:13:05.402096 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 00:13:05.402156 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:13:05.405644 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 00:13:05.405741 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:13:05.409293 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 00:13:05.409360 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:13:05.412750 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:13:05.412818 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:13:05.424946 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 1 00:13:05.426951 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 00:13:05.427041 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:13:05.431349 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:13:05.431412 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:13:05.435645 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 00:13:05.435791 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 1 00:13:05.778286 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 00:13:05.778439 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 1 00:13:05.778844 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 1 00:13:05.804025 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 00:13:05.804089 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 1 00:13:05.826839 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 1 00:13:05.835925 systemd[1]: Switching root.
Nov 1 00:13:05.865194 systemd-journald[193]: Journal stopped
Nov 1 00:13:07.617229 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Nov 1 00:13:07.617313 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 00:13:07.617337 kernel: SELinux: policy capability open_perms=1
Nov 1 00:13:07.617351 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 00:13:07.617363 kernel: SELinux: policy capability always_check_network=0
Nov 1 00:13:07.617381 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 00:13:07.617393 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 00:13:07.617405 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 00:13:07.617416 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 00:13:07.617428 kernel: audit: type=1403 audit(1761955986.582:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 00:13:07.617446 systemd[1]: Successfully loaded SELinux policy in 47.182ms.
Nov 1 00:13:07.617464 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.797ms.
Nov 1 00:13:07.617478 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 00:13:07.617490 systemd[1]: Detected virtualization kvm.
Nov 1 00:13:07.617502 systemd[1]: Detected architecture x86-64.
Nov 1 00:13:07.617514 systemd[1]: Detected first boot.
Nov 1 00:13:07.617526 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:13:07.617539 zram_generator::config[1053]: No configuration found.
Nov 1 00:13:07.617563 systemd[1]: Populated /etc with preset unit settings.
Nov 1 00:13:07.617584 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 1 00:13:07.617603 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 1 00:13:07.617615 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 1 00:13:07.617629 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 1 00:13:07.617645 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 1 00:13:07.617664 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 1 00:13:07.617679 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 1 00:13:07.617829 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 1 00:13:07.617844 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 1 00:13:07.617861 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 1 00:13:07.617873 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 1 00:13:07.617885 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:13:07.617898 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:13:07.617910 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 1 00:13:07.617923 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 1 00:13:07.617947 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 1 00:13:07.617963 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 00:13:07.617980 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 1 00:13:07.618001 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:13:07.618019 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 1 00:13:07.618033 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 1 00:13:07.618046 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:13:07.618061 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 1 00:13:07.618074 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:13:07.618086 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 00:13:07.618101 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 00:13:07.618116 systemd[1]: Reached target swap.target - Swaps.
Nov 1 00:13:07.618128 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 1 00:13:07.618140 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 1 00:13:07.618152 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:13:07.618169 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:13:07.618182 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:13:07.618194 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 1 00:13:07.618212 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 1 00:13:07.618225 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 1 00:13:07.618240 systemd[1]: Mounting media.mount - External Media Directory...
Nov 1 00:13:07.618252 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:13:07.618266 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 1 00:13:07.618281 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 1 00:13:07.618300 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 1 00:13:07.618314 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 1 00:13:07.618326 systemd[1]: Reached target machines.target - Containers.
Nov 1 00:13:07.618338 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 1 00:13:07.618357 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:13:07.618376 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:13:07.618393 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 1 00:13:07.618406 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:13:07.618420 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 00:13:07.618432 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:13:07.618445 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 1 00:13:07.618457 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:13:07.618470 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 00:13:07.618486 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 1 00:13:07.618498 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 1 00:13:07.618510 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 1 00:13:07.618523 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 1 00:13:07.618535 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:13:07.618547 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:13:07.618559 kernel: loop: module loaded
Nov 1 00:13:07.618593 systemd-journald[1116]: Collecting audit messages is disabled.
Nov 1 00:13:07.618621 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 1 00:13:07.618638 systemd-journald[1116]: Journal started
Nov 1 00:13:07.618668 systemd-journald[1116]: Runtime Journal (/run/log/journal/8540e67f97a1488b9051be7627ac6fa5) is 6.0M, max 48.4M, 42.3M free.
Nov 1 00:13:07.292905 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 00:13:07.309685 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 1 00:13:07.310290 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 1 00:13:07.625714 kernel: fuse: init (API version 7.39)
Nov 1 00:13:07.676019 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 1 00:13:07.681734 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:13:07.686059 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 1 00:13:07.686141 systemd[1]: Stopped verity-setup.service.
Nov 1 00:13:07.690090 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:13:07.694729 kernel: ACPI: bus type drm_connector registered
Nov 1 00:13:07.694798 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:13:07.718055 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 1 00:13:07.720353 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 1 00:13:07.722736 systemd[1]: Mounted media.mount - External Media Directory.
Nov 1 00:13:07.724863 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 1 00:13:07.727319 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 1 00:13:07.729849 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 1 00:13:07.732382 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:13:07.735600 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 00:13:07.735957 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 1 00:13:07.738802 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:13:07.739076 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:13:07.741650 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:13:07.741930 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 00:13:07.744722 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:13:07.744975 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:13:07.747966 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 00:13:07.748219 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 1 00:13:07.751418 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:13:07.751679 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:13:07.754502 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:13:07.757444 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 00:13:07.760431 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 1 00:13:07.780447 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 1 00:13:07.789871 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 1 00:13:07.837970 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 1 00:13:07.840382 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 00:13:07.840443 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:13:07.844128 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 1 00:13:07.848448 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 1 00:13:07.852635 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 1 00:13:07.855039 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:13:07.857422 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 1 00:13:07.861191 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 1 00:13:07.863491 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:13:07.866046 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 1 00:13:07.868604 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 00:13:07.874990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:13:07.885649 systemd-journald[1116]: Time spent on flushing to /var/log/journal/8540e67f97a1488b9051be7627ac6fa5 is 14.571ms for 945 entries.
Nov 1 00:13:07.885649 systemd-journald[1116]: System Journal (/var/log/journal/8540e67f97a1488b9051be7627ac6fa5) is 8.0M, max 195.6M, 187.6M free.
Nov 1 00:13:08.698255 systemd-journald[1116]: Received client request to flush runtime journal.
Nov 1 00:13:08.698329 kernel: loop0: detected capacity change from 0 to 219144
Nov 1 00:13:08.698365 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 1 00:13:08.698393 kernel: loop1: detected capacity change from 0 to 142488
Nov 1 00:13:08.698416 kernel: loop2: detected capacity change from 0 to 140768
Nov 1 00:13:08.698432 kernel: loop3: detected capacity change from 0 to 219144
Nov 1 00:13:08.698451 kernel: loop4: detected capacity change from 0 to 142488
Nov 1 00:13:08.698473 kernel: loop5: detected capacity change from 0 to 140768
Nov 1 00:13:08.698492 zram_generator::config[1205]: No configuration found.
Nov 1 00:13:07.897925 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 1 00:13:07.913974 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:13:07.916974 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 1 00:13:07.919556 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 1 00:13:07.922664 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 1 00:13:07.948956 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 1 00:13:07.978024 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 1 00:13:08.017080 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:13:08.488245 (sd-merge)[1172]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Nov 1 00:13:08.488958 (sd-merge)[1172]: Merged extensions into '/usr'.
Nov 1 00:13:08.494672 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 1 00:13:08.494700 systemd[1]: Reloading...
Nov 1 00:13:08.710654 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:13:08.769804 systemd[1]: Reloading finished in 274 ms.
Nov 1 00:13:08.773839 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 1 00:13:08.806031 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 1 00:13:08.847007 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 1 00:13:08.849424 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 1 00:13:08.851757 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 1 00:13:08.860349 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 1 00:13:08.872974 systemd[1]: Starting ensure-sysext.service...
Nov 1 00:13:08.914716 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 1 00:13:08.918411 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 1 00:13:08.922037 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Nov 1 00:13:08.922054 systemd[1]: Reloading...
Nov 1 00:13:08.988744 zram_generator::config[1274]: No configuration found.
Nov 1 00:13:09.116360 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:13:09.169252 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 1 00:13:09.169498 systemd[1]: Reloading finished in 247 ms.
Nov 1 00:13:09.196789 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 1 00:13:09.199130 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 1 00:13:09.211290 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 1 00:13:09.213718 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 1 00:13:09.238071 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:13:09.241491 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 00:13:09.245366 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:13:09.245730 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:13:09.247114 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:13:09.251403 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:13:09.256660 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:13:09.260848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:13:09.261207 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:13:09.262857 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:13:09.263075 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:13:09.266034 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:13:09.266216 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:13:09.268912 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:13:09.269098 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:13:09.269605 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 1 00:13:09.270325 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 1 00:13:09.271732 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 1 00:13:09.271791 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Nov 1 00:13:09.271806 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Nov 1 00:13:09.272028 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Nov 1 00:13:09.272114 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Nov 1 00:13:09.276427 systemd-tmpfiles[1316]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 00:13:09.276441 systemd-tmpfiles[1316]: Skipping /boot
Nov 1 00:13:09.276654 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:13:09.276858 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:13:09.280920 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:13:09.284006 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:13:09.287340 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:13:09.288523 systemd-tmpfiles[1316]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 00:13:09.288534 systemd-tmpfiles[1316]: Skipping /boot
Nov 1 00:13:09.289403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:13:09.289518 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:13:09.290674 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:13:09.293866 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:13:09.294070 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:13:09.298757 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:13:09.298949 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:13:09.304943 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:13:09.305199 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:13:09.310795 systemd[1]: Finished ensure-sysext.service.
Nov 1 00:13:09.314381 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:13:09.314854 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:13:09.319890 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:13:09.322810 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 00:13:09.323017 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:13:09.323076 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:13:09.324509 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:13:09.326393 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:13:09.326851 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:13:09.333144 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 1 00:13:09.336276 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 1 00:13:09.341845 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 1 00:13:09.348856 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 00:13:09.361911 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 1 00:13:09.365230 systemd-udevd[1335]: Using default interface naming scheme 'v255'.
Nov 1 00:13:09.367842 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 1 00:13:09.371154 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:13:09.371343 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:13:09.373883 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:13:09.374068 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 00:13:09.382214 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 00:13:09.391909 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 1 00:13:09.393741 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:13:09.396233 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 1 00:13:09.412366 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 00:13:09.415789 augenrules[1369]: No rules
Nov 1 00:13:09.416292 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 1 00:13:09.418989 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 1 00:13:09.421738 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 1 00:13:09.441080 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 1 00:13:09.444790 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:13:09.452148 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 1 00:13:09.457841 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 1 00:13:09.485114 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 1 00:13:09.505763 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1381)
Nov 1 00:13:09.557743 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 1 00:13:09.564368 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 1 00:13:09.568113 systemd[1]: Reached target time-set.target - System Time Set.
Nov 1 00:13:09.574757 kernel: ACPI: button: Power Button [PWRF]
Nov 1 00:13:09.579336 systemd-networkd[1367]: lo: Link UP
Nov 1 00:13:09.579349 systemd-networkd[1367]: lo: Gained carrier
Nov 1 00:13:09.580284 systemd-resolved[1341]: Positive Trust Anchors:
Nov 1 00:13:09.580307 systemd-resolved[1341]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:13:09.580339 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 00:13:09.581475 systemd-networkd[1367]: Enumeration completed
Nov 1 00:13:09.582074 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 00:13:09.582869 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:13:09.582878 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:13:09.583997 systemd-networkd[1367]: eth0: Link UP
Nov 1 00:13:09.584006 systemd-networkd[1367]: eth0: Gained carrier
Nov 1 00:13:09.584019 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:13:09.587989 systemd-resolved[1341]: Defaulting to hostname 'linux'.
Nov 1 00:13:09.591062 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 1 00:13:09.592723 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 1 00:13:09.598291 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 1 00:13:09.598565 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 1 00:13:09.598772 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 1 00:13:09.602431 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 00:13:09.606941 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 1 00:13:09.614389 systemd-networkd[1367]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 1 00:13:09.616088 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection.
Nov 1 00:13:09.616526 systemd[1]: Reached target network.target - Network.
Nov 1 00:13:09.618238 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:13:10.738940 systemd-resolved[1341]: Clock change detected. Flushing caches.
Nov 1 00:13:10.739037 systemd-timesyncd[1343]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 1 00:13:10.739093 systemd-timesyncd[1343]: Initial clock synchronization to Sat 2025-11-01 00:13:10.738864 UTC.
Nov 1 00:13:10.744883 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 1 00:13:10.767137 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:13:10.775448 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 1 00:13:10.983718 kernel: mousedev: PS/2 mouse device common for all mice
Nov 1 00:13:10.999725 kernel: kvm_amd: TSC scaling supported
Nov 1 00:13:10.999845 kernel: kvm_amd: Nested Virtualization enabled
Nov 1 00:13:10.999865 kernel: kvm_amd: Nested Paging enabled
Nov 1 00:13:10.999878 kernel: kvm_amd: LBR virtualization supported
Nov 1 00:13:10.999903 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 1 00:13:11.001753 kernel: kvm_amd: Virtual GIF supported
Nov 1 00:13:11.041714 kernel: EDAC MC: Ver: 3.0.0
Nov 1 00:13:11.079523 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 1 00:13:11.138932 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 1 00:13:11.142028 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:13:11.153995 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:13:11.196476 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 1 00:13:11.199313 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:13:11.201546 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 00:13:11.203744 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 1 00:13:11.206146 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 1 00:13:11.208990 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 1 00:13:11.211238 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 1 00:13:11.213529 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 1 00:13:11.216006 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 1 00:13:11.216048 systemd[1]: Reached target paths.target - Path Units.
Nov 1 00:13:11.217820 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 00:13:11.220415 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 1 00:13:11.224390 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 1 00:13:11.236449 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 1 00:13:11.239980 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 1 00:13:11.242637 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 1 00:13:11.244837 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 00:13:11.246678 systemd[1]: Reached target basic.target - Basic System.
Nov 1 00:13:11.246846 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 1 00:13:11.246886 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 1 00:13:11.248795 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 1 00:13:11.251910 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 1 00:13:11.255946 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:13:11.256835 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 1 00:13:11.261149 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 1 00:13:11.263414 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 1 00:13:11.266219 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 1 00:13:11.268116 jq[1431]: false
Nov 1 00:13:11.270078 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 1 00:13:11.276610 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 1 00:13:11.281002 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 1 00:13:11.290296 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 1 00:13:11.290675 extend-filesystems[1432]: Found loop3
Nov 1 00:13:11.294347 extend-filesystems[1432]: Found loop4
Nov 1 00:13:11.294347 extend-filesystems[1432]: Found loop5
Nov 1 00:13:11.294347 extend-filesystems[1432]: Found sr0
Nov 1 00:13:11.294347 extend-filesystems[1432]: Found vda
Nov 1 00:13:11.294347 extend-filesystems[1432]: Found vda1
Nov 1 00:13:11.294347 extend-filesystems[1432]: Found vda2
Nov 1 00:13:11.294347 extend-filesystems[1432]: Found vda3
Nov 1 00:13:11.294347 extend-filesystems[1432]: Found usr
Nov 1 00:13:11.294347 extend-filesystems[1432]: Found vda4
Nov 1 00:13:11.294347 extend-filesystems[1432]: Found vda6
Nov 1 00:13:11.294347 extend-filesystems[1432]: Found vda7
Nov 1 00:13:11.294347 extend-filesystems[1432]: Found vda9
Nov 1 00:13:11.294347 extend-filesystems[1432]: Checking size of /dev/vda9
Nov 1 00:13:11.324042 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 1 00:13:11.324099 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1376)
Nov 1 00:13:11.297231 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 1 00:13:11.324253 extend-filesystems[1432]: Resized partition /dev/vda9
Nov 1 00:13:11.304571 dbus-daemon[1430]: [system] SELinux support is enabled
Nov 1 00:13:11.297975 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 1 00:13:11.326654 extend-filesystems[1447]: resize2fs 1.47.1 (20-May-2024)
Nov 1 00:13:11.312269 systemd[1]: Starting update-engine.service - Update Engine...
Nov 1 00:13:11.331092 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 1 00:13:11.337089 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 1 00:13:11.349970 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 1 00:13:11.353737 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Nov 1 00:13:11.356014 jq[1448]: true
Nov 1 00:13:11.365838 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 1 00:13:11.366213 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 1 00:13:11.368253 systemd[1]: motdgen.service: Deactivated successfully.
Nov 1 00:13:11.370777 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 1 00:13:11.376199 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 1 00:13:11.376762 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 1 00:13:11.387948 extend-filesystems[1447]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 1 00:13:11.387948 extend-filesystems[1447]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 1 00:13:11.387948 extend-filesystems[1447]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Nov 1 00:13:11.562479 extend-filesystems[1432]: Resized filesystem in /dev/vda9
Nov 1 00:13:11.390388 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 1 00:13:11.571972 update_engine[1443]: I20251101 00:13:11.571465 1443 main.cc:92] Flatcar Update Engine starting
Nov 1 00:13:11.390727 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 1 00:13:11.573241 update_engine[1443]: I20251101 00:13:11.573195 1443 update_check_scheduler.cc:74] Next update check in 5m26s
Nov 1 00:13:11.579769 jq[1457]: true
Nov 1 00:13:11.586166 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 1 00:13:11.594583 systemd-logind[1438]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 1 00:13:11.594605 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 1 00:13:11.595477 systemd-logind[1438]: New seat seat0.
Nov 1 00:13:11.596572 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 1 00:13:11.604783 tar[1456]: linux-amd64/LICENSE
Nov 1 00:13:11.607948 tar[1456]: linux-amd64/helm
Nov 1 00:13:11.607668 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 1 00:13:11.618813 systemd[1]: Started update-engine.service - Update Engine.
Nov 1 00:13:11.621818 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 1 00:13:11.622018 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 1 00:13:11.624408 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 1 00:13:11.624536 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 1 00:13:11.638451 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 1 00:13:11.706555 sshd_keygen[1446]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 1 00:13:11.726493 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 1 00:13:11.752750 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 1 00:13:11.790253 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 1 00:13:11.798833 systemd[1]: issuegen.service: Deactivated successfully.
Nov 1 00:13:11.799099 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 1 00:13:11.802918 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 1 00:13:11.855883 bash[1485]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 00:13:11.857894 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 1 00:13:11.864004 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 1 00:13:11.938071 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 1 00:13:11.952804 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 1 00:13:11.956596 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 1 00:13:11.959424 systemd[1]: Reached target getty.target - Login Prompts.
Nov 1 00:13:12.148407 containerd[1460]: time="2025-11-01T00:13:12.148236444Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 1 00:13:12.181270 containerd[1460]: time="2025-11-01T00:13:12.181176118Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:13:12.183949 containerd[1460]: time="2025-11-01T00:13:12.183897141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:13:12.183949 containerd[1460]: time="2025-11-01T00:13:12.183943478Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 1 00:13:12.184018 containerd[1460]: time="2025-11-01T00:13:12.183964998Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 1 00:13:12.184234 containerd[1460]: time="2025-11-01T00:13:12.184210198Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 1 00:13:12.184282 containerd[1460]: time="2025-11-01T00:13:12.184235155Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 1 00:13:12.184413 containerd[1460]: time="2025-11-01T00:13:12.184383152Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:13:12.184413 containerd[1460]: time="2025-11-01T00:13:12.184402909Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:13:12.184669 containerd[1460]: time="2025-11-01T00:13:12.184640946Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:13:12.184669 containerd[1460]: time="2025-11-01T00:13:12.184663117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 1 00:13:12.184737 containerd[1460]: time="2025-11-01T00:13:12.184677905Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:13:12.184737 containerd[1460]: time="2025-11-01T00:13:12.184705076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 1 00:13:12.184875 containerd[1460]: time="2025-11-01T00:13:12.184840500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:13:12.185195 containerd[1460]: time="2025-11-01T00:13:12.185166281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:13:12.185334 containerd[1460]: time="2025-11-01T00:13:12.185305672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:13:12.185334 containerd[1460]: time="2025-11-01T00:13:12.185326231Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 1 00:13:12.185469 containerd[1460]: time="2025-11-01T00:13:12.185444603Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 1 00:13:12.185548 containerd[1460]: time="2025-11-01T00:13:12.185524743Z" level=info msg="metadata content store policy set" policy=shared
Nov 1 00:13:12.193172 containerd[1460]: time="2025-11-01T00:13:12.193107173Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 1 00:13:12.193172 containerd[1460]: time="2025-11-01T00:13:12.193189026Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 1 00:13:12.193321 containerd[1460]: time="2025-11-01T00:13:12.193207892Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 1 00:13:12.193321 containerd[1460]: time="2025-11-01T00:13:12.193234862Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 1 00:13:12.193321 containerd[1460]: time="2025-11-01T00:13:12.193252605Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 1 00:13:12.193482 containerd[1460]: time="2025-11-01T00:13:12.193440818Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 1 00:13:12.193762 containerd[1460]: time="2025-11-01T00:13:12.193730351Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 1 00:13:12.193932 containerd[1460]: time="2025-11-01T00:13:12.193888248Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 1 00:13:12.193932 containerd[1460]: time="2025-11-01T00:13:12.193909257Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 1 00:13:12.193932 containerd[1460]: time="2025-11-01T00:13:12.193923323Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 1 00:13:12.193932 containerd[1460]: time="2025-11-01T00:13:12.193937069Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 1 00:13:12.194082 containerd[1460]: time="2025-11-01T00:13:12.193959371Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 1 00:13:12.194082 containerd[1460]: time="2025-11-01T00:13:12.193982264Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 1 00:13:12.194082 containerd[1460]: time="2025-11-01T00:13:12.193997402Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 1 00:13:12.194082 containerd[1460]: time="2025-11-01T00:13:12.194027238Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 1 00:13:12.194082 containerd[1460]: time="2025-11-01T00:13:12.194040723Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 1 00:13:12.194082 containerd[1460]: time="2025-11-01T00:13:12.194052606Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 1 00:13:12.194082 containerd[1460]: time="2025-11-01T00:13:12.194064057Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 1 00:13:12.194263 containerd[1460]: time="2025-11-01T00:13:12.194095817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 1 00:13:12.194263 containerd[1460]: time="2025-11-01T00:13:12.194116255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 1 00:13:12.194263 containerd[1460]: time="2025-11-01T00:13:12.194132405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 1 00:13:12.194263 containerd[1460]: time="2025-11-01T00:13:12.194149477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 1 00:13:12.194263 containerd[1460]: time="2025-11-01T00:13:12.194165327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 1 00:13:12.194263 containerd[1460]: time="2025-11-01T00:13:12.194181237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 1 00:13:12.194263 containerd[1460]: time="2025-11-01T00:13:12.194198539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 1 00:13:12.194263 containerd[1460]: time="2025-11-01T00:13:12.194215521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 1 00:13:12.194263 containerd[1460]: time="2025-11-01T00:13:12.194231541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 1 00:13:12.194263 containerd[1460]: time="2025-11-01T00:13:12.194250276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 1 00:13:12.194263 containerd[1460]: time="2025-11-01T00:13:12.194265635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 1 00:13:12.194543 containerd[1460]: time="2025-11-01T00:13:12.194288648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 1 00:13:12.194543 containerd[1460]: time="2025-11-01T00:13:12.194304668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 1 00:13:12.194543 containerd[1460]: time="2025-11-01T00:13:12.194325207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 1 00:13:12.194543 containerd[1460]: time="2025-11-01T00:13:12.194358139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..."
type=io.containerd.grpc.v1 Nov 1 00:13:12.194543 containerd[1460]: time="2025-11-01T00:13:12.194374600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:13:12.194543 containerd[1460]: time="2025-11-01T00:13:12.194388936Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:13:12.194543 containerd[1460]: time="2025-11-01T00:13:12.194460150Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:13:12.194543 containerd[1460]: time="2025-11-01T00:13:12.194485197Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 1 00:13:12.194543 containerd[1460]: time="2025-11-01T00:13:12.194500967Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:13:12.194543 containerd[1460]: time="2025-11-01T00:13:12.194519141Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 1 00:13:12.194543 containerd[1460]: time="2025-11-01T00:13:12.194533618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:13:12.194850 containerd[1460]: time="2025-11-01T00:13:12.194551752Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 1 00:13:12.194850 containerd[1460]: time="2025-11-01T00:13:12.194567972Z" level=info msg="NRI interface is disabled by configuration." Nov 1 00:13:12.194850 containerd[1460]: time="2025-11-01T00:13:12.194581738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 1 00:13:12.195093 containerd[1460]: time="2025-11-01T00:13:12.195002708Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:13:12.195093 containerd[1460]: time="2025-11-01T00:13:12.195092766Z" level=info msg="Connect containerd service" Nov 1 00:13:12.195480 containerd[1460]: time="2025-11-01T00:13:12.195145505Z" level=info msg="using legacy CRI server" Nov 1 00:13:12.195480 containerd[1460]: time="2025-11-01T00:13:12.195163459Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 00:13:12.195480 containerd[1460]: time="2025-11-01T00:13:12.195288323Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:13:12.196164 containerd[1460]: time="2025-11-01T00:13:12.196124141Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:13:12.196549 
containerd[1460]: time="2025-11-01T00:13:12.196319517Z" level=info msg="Start subscribing containerd event" Nov 1 00:13:12.196549 containerd[1460]: time="2025-11-01T00:13:12.196521245Z" level=info msg="Start recovering state" Nov 1 00:13:12.196649 containerd[1460]: time="2025-11-01T00:13:12.196609421Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:13:12.196649 containerd[1460]: time="2025-11-01T00:13:12.196616023Z" level=info msg="Start event monitor" Nov 1 00:13:12.196649 containerd[1460]: time="2025-11-01T00:13:12.196643775Z" level=info msg="Start snapshots syncer" Nov 1 00:13:12.196721 containerd[1460]: time="2025-11-01T00:13:12.196659054Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:13:12.196721 containerd[1460]: time="2025-11-01T00:13:12.196669684Z" level=info msg="Start streaming server" Nov 1 00:13:12.196909 containerd[1460]: time="2025-11-01T00:13:12.196659925Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:13:12.196940 containerd[1460]: time="2025-11-01T00:13:12.196918150Z" level=info msg="containerd successfully booted in 0.050531s" Nov 1 00:13:12.197053 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 00:13:12.317305 tar[1456]: linux-amd64/README.md Nov 1 00:13:12.337010 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 1 00:13:12.689972 systemd-networkd[1367]: eth0: Gained IPv6LL Nov 1 00:13:12.694084 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 00:13:12.696891 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 00:13:12.709038 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 1 00:13:12.712603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:13:12.716760 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 00:13:12.746277 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 1 00:13:12.746624 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 1 00:13:12.749857 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 00:13:12.754360 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 00:13:13.921379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:13:13.924101 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 00:13:13.926314 systemd[1]: Startup finished in 1.409s (kernel) + 7.839s (initrd) + 6.272s (userspace) = 15.520s. Nov 1 00:13:13.927070 (kubelet)[1543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:13:14.054007 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 1 00:13:14.067383 systemd[1]: Started sshd@0-10.0.0.19:22-10.0.0.1:41222.service - OpenSSH per-connection server daemon (10.0.0.1:41222). Nov 1 00:13:14.130711 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 41222 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:13:14.133765 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:13:14.144212 systemd-logind[1438]: New session 1 of user core. Nov 1 00:13:14.145584 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
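Once containerd logs "successfully booted" and is serving on /run/containerd/containerd.sock, the socket can be exercised directly with the official Go client. A minimal sketch (assumes the github.com/containerd/containerd module and permission to read the socket):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
)

func main() {
	// Connect to the same socket the log shows containerd serving on.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Version round-trips a gRPC request, confirming the daemon is up.
	v, err := client.Version(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("containerd %s (revision %s)\n", v.Version, v.Revision)
}
```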
Nov 1 00:13:14.158022 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 00:13:14.199153 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 00:13:14.209151 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 00:13:14.213167 (systemd)[1558]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:13:14.351655 systemd[1558]: Queued start job for default target default.target. Nov 1 00:13:14.362185 systemd[1558]: Created slice app.slice - User Application Slice. Nov 1 00:13:14.362213 systemd[1558]: Reached target paths.target - Paths. Nov 1 00:13:14.362227 systemd[1558]: Reached target timers.target - Timers. Nov 1 00:13:14.364040 systemd[1558]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 00:13:14.378713 systemd[1558]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 00:13:14.378897 systemd[1558]: Reached target sockets.target - Sockets. Nov 1 00:13:14.378920 systemd[1558]: Reached target basic.target - Basic System. Nov 1 00:13:14.378969 systemd[1558]: Reached target default.target - Main User Target. Nov 1 00:13:14.379014 systemd[1558]: Startup finished in 154ms. Nov 1 00:13:14.379463 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 00:13:14.386843 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 00:13:14.452323 systemd[1]: Started sshd@1-10.0.0.19:22-10.0.0.1:41234.service - OpenSSH per-connection server daemon (10.0.0.1:41234). Nov 1 00:13:14.499135 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 41234 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:13:14.501762 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:13:14.508700 systemd-logind[1438]: New session 2 of user core. Nov 1 00:13:14.518115 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 1 00:13:14.546017 kubelet[1543]: E1101 00:13:14.545959 1543 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:13:14.551535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:13:14.551783 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:13:14.552190 systemd[1]: kubelet.service: Consumed 1.650s CPU time. Nov 1 00:13:14.594958 sshd[1570]: pam_unix(sshd:session): session closed for user core Nov 1 00:13:14.606183 systemd[1]: sshd@1-10.0.0.19:22-10.0.0.1:41234.service: Deactivated successfully. Nov 1 00:13:14.608940 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:13:14.612124 systemd-logind[1438]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:13:14.623075 systemd[1]: Started sshd@2-10.0.0.19:22-10.0.0.1:41238.service - OpenSSH per-connection server daemon (10.0.0.1:41238). Nov 1 00:13:14.624153 systemd-logind[1438]: Removed session 2. Nov 1 00:13:14.654352 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 41238 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:13:14.656286 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:13:14.660839 systemd-logind[1438]: New session 3 of user core. 
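The kubelet exit above is a plain missing-file failure: it is evidently started pointing at /var/lib/kubelet/config.yaml before anything (kubeadm, or the install.sh invoked via sudo later in the log) has written that file, so it exits 1 and systemd records the failure. A tiny sketch of the same preflight check:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// The path comes straight from the kubelet error in the log.
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); errors.Is(err, fs.ErrNotExist) {
		fmt.Printf("kubelet would exit: %s not written yet\n", path)
		return
	}
	fmt.Printf("%s present; kubelet can load its configuration\n", path)
}
```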
Nov 1 00:13:14.671848 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 00:13:14.724587 sshd[1578]: pam_unix(sshd:session): session closed for user core Nov 1 00:13:14.745046 systemd[1]: sshd@2-10.0.0.19:22-10.0.0.1:41238.service: Deactivated successfully. Nov 1 00:13:14.748439 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:13:14.750560 systemd-logind[1438]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:13:14.761521 systemd[1]: Started sshd@3-10.0.0.19:22-10.0.0.1:41240.service - OpenSSH per-connection server daemon (10.0.0.1:41240). Nov 1 00:13:14.763959 systemd-logind[1438]: Removed session 3. Nov 1 00:13:14.806780 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 41240 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:13:14.808528 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:13:14.813485 systemd-logind[1438]: New session 4 of user core. Nov 1 00:13:14.826963 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 1 00:13:14.883295 sshd[1585]: pam_unix(sshd:session): session closed for user core Nov 1 00:13:14.894957 systemd[1]: sshd@3-10.0.0.19:22-10.0.0.1:41240.service: Deactivated successfully. Nov 1 00:13:14.896976 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:13:14.898918 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:13:14.912982 systemd[1]: Started sshd@4-10.0.0.19:22-10.0.0.1:41246.service - OpenSSH per-connection server daemon (10.0.0.1:41246). Nov 1 00:13:14.914011 systemd-logind[1438]: Removed session 4. Nov 1 00:13:14.951717 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 41246 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:13:14.953703 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:13:14.958362 systemd-logind[1438]: New session 5 of user core. Nov 1 00:13:14.971831 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 1 00:13:15.034539 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:13:15.035100 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:13:15.054138 sudo[1595]: pam_unix(sudo:session): session closed for user root Nov 1 00:13:15.056213 sshd[1592]: pam_unix(sshd:session): session closed for user core Nov 1 00:13:15.063650 systemd[1]: sshd@4-10.0.0.19:22-10.0.0.1:41246.service: Deactivated successfully. Nov 1 00:13:15.065632 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:13:15.067334 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:13:15.069069 systemd[1]: Started sshd@5-10.0.0.19:22-10.0.0.1:41256.service - OpenSSH per-connection server daemon (10.0.0.1:41256). Nov 1 00:13:15.069913 systemd-logind[1438]: Removed session 5. Nov 1 00:13:15.105989 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 41256 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:13:15.108612 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:13:15.113703 systemd-logind[1438]: New session 6 of user core. Nov 1 00:13:15.122938 systemd[1]: Started session-6.scope - Session 6 of User core. 
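The sudo entry above runs setenforce 1, switching SELinux from permissive to enforcing. The current mode is exposed through selinuxfs, and setenforce simply writes to it; a read-only Go sketch (assumes /sys/fs/selinux is mounted, as it is here):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// /sys/fs/selinux/enforce holds "1" (enforcing) or "0" (permissive);
	// `setenforce 1` is equivalent to writing "1" to this file.
	b, err := os.ReadFile("/sys/fs/selinux/enforce")
	if err != nil {
		log.Fatal(err) // selinuxfs not mounted or SELinux disabled
	}
	if strings.TrimSpace(string(b)) == "1" {
		fmt.Println("SELinux: enforcing")
	} else {
		fmt.Println("SELinux: permissive")
	}
}
```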
Nov 1 00:13:15.180596 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:13:15.181032 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:13:15.187062 sudo[1604]: pam_unix(sudo:session): session closed for user root Nov 1 00:13:15.196146 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:13:15.196591 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:13:15.223960 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 1 00:13:15.226497 auditctl[1607]: No rules Nov 1 00:13:15.228099 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:13:15.228398 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 1 00:13:15.230760 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:13:15.272085 augenrules[1625]: No rules Nov 1 00:13:15.274574 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:13:15.276090 sudo[1603]: pam_unix(sudo:session): session closed for user root Nov 1 00:13:15.278314 sshd[1600]: pam_unix(sshd:session): session closed for user core Nov 1 00:13:15.297680 systemd[1]: sshd@5-10.0.0.19:22-10.0.0.1:41256.service: Deactivated successfully. Nov 1 00:13:15.299989 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:13:15.301935 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:13:15.312993 systemd[1]: Started sshd@6-10.0.0.19:22-10.0.0.1:41262.service - OpenSSH per-connection server daemon (10.0.0.1:41262). Nov 1 00:13:15.314196 systemd-logind[1438]: Removed session 6. Nov 1 00:13:15.347005 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 41262 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:13:15.348764 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:13:15.353914 systemd-logind[1438]: New session 7 of user core. Nov 1 00:13:15.362954 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 1 00:13:15.421239 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:13:15.421604 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:13:16.110035 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 00:13:16.110239 (dockerd)[1655]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 00:13:16.821575 dockerd[1655]: time="2025-11-01T00:13:16.821463042Z" level=info msg="Starting up" Nov 1 00:13:17.632681 dockerd[1655]: time="2025-11-01T00:13:17.632567763Z" level=info msg="Loading containers: start." Nov 1 00:13:17.771747 kernel: Initializing XFRM netlink socket Nov 1 00:13:17.868961 systemd-networkd[1367]: docker0: Link UP Nov 1 00:13:17.892953 dockerd[1655]: time="2025-11-01T00:13:17.892799911Z" level=info msg="Loading containers: done." Nov 1 00:13:17.923197 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3396787064-merged.mount: Deactivated successfully. 
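The sequence above deletes the shipped audit rule files, restarts audit-rules.service, and both auditctl and augenrules then report "No rules". A hedged sketch that reproduces the verification step by shelling out to auditctl (requires root and the audit userspace tools):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// `auditctl -l` prints the rules currently loaded in the kernel;
	// after the cleanup in the log it prints exactly "No rules".
	out, err := exec.Command("auditctl", "-l").CombinedOutput()
	if err != nil {
		log.Fatalf("auditctl -l: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}
```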
Nov 1 00:13:17.929543 dockerd[1655]: time="2025-11-01T00:13:17.929447238Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:13:17.929788 dockerd[1655]: time="2025-11-01T00:13:17.929619170Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 1 00:13:17.929837 dockerd[1655]: time="2025-11-01T00:13:17.929814867Z" level=info msg="Daemon has completed initialization" Nov 1 00:13:17.980933 dockerd[1655]: time="2025-11-01T00:13:17.980835760Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:13:17.981157 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 00:13:18.759832 containerd[1460]: time="2025-11-01T00:13:18.759766313Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 1 00:13:19.577080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1174581621.mount: Deactivated successfully. Nov 1 00:13:20.872426 containerd[1460]: time="2025-11-01T00:13:20.872330254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:20.873466 containerd[1460]: time="2025-11-01T00:13:20.873419046Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Nov 1 00:13:20.875424 containerd[1460]: time="2025-11-01T00:13:20.875371548Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:20.880386 containerd[1460]: time="2025-11-01T00:13:20.880336519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:20.881657 containerd[1460]: time="2025-11-01T00:13:20.881606230Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.121775406s" Nov 1 00:13:20.881722 containerd[1460]: time="2025-11-01T00:13:20.881671502Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 1 00:13:20.882529 containerd[1460]: time="2025-11-01T00:13:20.882485238Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 1 00:13:22.371828 containerd[1460]: time="2025-11-01T00:13:22.371726307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:22.372865 containerd[1460]: time="2025-11-01T00:13:22.372753704Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Nov 1 00:13:22.375026 containerd[1460]: time="2025-11-01T00:13:22.374965512Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:22.379556 containerd[1460]: time="2025-11-01T00:13:22.379495487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:22.381423 containerd[1460]: time="2025-11-01T00:13:22.381348762Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.498793834s" Nov 1 00:13:22.381488 containerd[1460]: time="2025-11-01T00:13:22.381440034Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 1 00:13:22.385585 containerd[1460]: time="2025-11-01T00:13:22.385547376Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 1 00:13:23.946049 containerd[1460]: time="2025-11-01T00:13:23.945966309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:23.947149 containerd[1460]: time="2025-11-01T00:13:23.947065841Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Nov 1 00:13:23.950019 containerd[1460]: time="2025-11-01T00:13:23.949987941Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:23.954030 containerd[1460]: time="2025-11-01T00:13:23.953990908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:23.955373 containerd[1460]: time="2025-11-01T00:13:23.955338956Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.569749981s" Nov 1 00:13:23.955479 containerd[1460]: time="2025-11-01T00:13:23.955393899Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 1 00:13:23.956072 containerd[1460]: time="2025-11-01T00:13:23.956037606Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 1 00:13:24.639277 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:13:24.649115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:13:24.863106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 1 00:13:24.872538 (kubelet)[1878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:13:25.266922 kubelet[1878]: E1101 00:13:25.266852 1878 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:13:25.273918 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:13:25.274150 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:13:26.532024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1946668309.mount: Deactivated successfully. Nov 1 00:13:26.850642 containerd[1460]: time="2025-11-01T00:13:26.850474719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:26.851573 containerd[1460]: time="2025-11-01T00:13:26.851523586Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Nov 1 00:13:26.852786 containerd[1460]: time="2025-11-01T00:13:26.852746449Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:26.859569 containerd[1460]: time="2025-11-01T00:13:26.859519972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:26.860256 containerd[1460]: time="2025-11-01T00:13:26.860227068Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.904157783s" Nov 1 00:13:26.860321 containerd[1460]: time="2025-11-01T00:13:26.860259970Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 1 00:13:26.860939 containerd[1460]: time="2025-11-01T00:13:26.860859193Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 1 00:13:27.553428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1029408955.mount: Deactivated successfully. 
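The kube-proxy pull above is driven through containerd's CRI plugin, but the same fetch-and-unpack can be reproduced with the Go client. A minimal sketch, assuming the "k8s.io" namespace the CRI plugin uses and network access to registry.k8s.io:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// WithPullUnpack unpacks layers into the overlayfs snapshotter,
	// mirroring the ImageCreate events in the log.
	img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.34.1",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}
```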
Nov 1 00:13:29.198955 containerd[1460]: time="2025-11-01T00:13:29.198884465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:29.245109 containerd[1460]: time="2025-11-01T00:13:29.245012542Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 1 00:13:29.282542 containerd[1460]: time="2025-11-01T00:13:29.282467664Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:29.303636 containerd[1460]: time="2025-11-01T00:13:29.303590046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:29.304976 containerd[1460]: time="2025-11-01T00:13:29.304903830Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.444000704s" Nov 1 00:13:29.304976 containerd[1460]: time="2025-11-01T00:13:29.304962199Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 1 00:13:29.306077 containerd[1460]: time="2025-11-01T00:13:29.306045160Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 1 00:13:29.968753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007694742.mount: Deactivated successfully. 
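After each pull, the image, its repo tag, and its repo digest are queryable from the image store. A short sketch listing what the k8s.io namespace holds, under the same client and namespace assumptions as the pull sketch above:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	images, err := client.ListImages(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		size, err := img.Size(ctx) // content size in the local store
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%-60s %d bytes\n", img.Name(), size)
	}
}
```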
Nov 1 00:13:29.982709 containerd[1460]: time="2025-11-01T00:13:29.982649334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:29.988581 containerd[1460]: time="2025-11-01T00:13:29.988522489Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 1 00:13:29.993057 containerd[1460]: time="2025-11-01T00:13:29.992989866Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:29.995577 containerd[1460]: time="2025-11-01T00:13:29.995534398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:29.996730 containerd[1460]: time="2025-11-01T00:13:29.996659128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 690.584522ms" Nov 1 00:13:29.996793 containerd[1460]: time="2025-11-01T00:13:29.996737885Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 1 00:13:29.997473 containerd[1460]: time="2025-11-01T00:13:29.997330336Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 1 00:13:35.389380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:13:35.404142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:13:35.644533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:13:35.655376 (kubelet)[1999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:13:36.283913 kubelet[1999]: E1101 00:13:36.282018 1999 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:13:36.292741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:13:36.293042 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
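kubelet.service has now failed twice, and the "Scheduled restart job, restart counter is at 2" entries show the unit's Restart= logic driving the retries. systemd exposes that counter as the NRestarts service property; a hedged sketch reading it over D-Bus with the go-systemd bindings (assumes access to the system bus):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewWithContext(ctx) // system bus connection to PID 1
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// NRestarts is the same counter the journal prints on each
	// "Scheduled restart job" line.
	prop, err := conn.GetUnitTypePropertyContext(ctx,
		"kubelet.service", "Service", "NRestarts")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("kubelet.service NRestarts = %v\n", prop.Value.Value())
}
```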
Nov 1 00:13:37.167660 containerd[1460]: time="2025-11-01T00:13:37.166750318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:37.170481 containerd[1460]: time="2025-11-01T00:13:37.170370748Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Nov 1 00:13:37.178321 containerd[1460]: time="2025-11-01T00:13:37.175417993Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:37.181814 containerd[1460]: time="2025-11-01T00:13:37.180509221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:13:37.184973 containerd[1460]: time="2025-11-01T00:13:37.184885768Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 7.187521408s" Nov 1 00:13:37.184973 containerd[1460]: time="2025-11-01T00:13:37.184940902Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 1 00:13:42.375631 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:13:42.386664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:13:42.425029 systemd[1]: Reloading requested from client PID 2040 ('systemctl') (unit session-7.scope)... Nov 1 00:13:42.425056 systemd[1]: Reloading... Nov 1 00:13:42.535770 zram_generator::config[2079]: No configuration found. Nov 1 00:13:43.152970 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:13:43.316224 systemd[1]: Reloading finished in 890 ms. Nov 1 00:13:43.403885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:13:43.411012 (kubelet)[2118]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:13:43.411341 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:13:43.412264 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:13:43.412595 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:13:43.434307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:13:43.705562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:13:43.712524 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:13:43.808273 kubelet[2130]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:13:43.808273 kubelet[2130]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:13:43.808982 kubelet[2130]: I1101 00:13:43.808287 2130 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:13:44.096777 kubelet[2130]: I1101 00:13:44.096555 2130 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 00:13:44.096777 kubelet[2130]: I1101 00:13:44.096602 2130 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:13:44.098228 kubelet[2130]: I1101 00:13:44.098184 2130 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 00:13:44.098228 kubelet[2130]: I1101 00:13:44.098209 2130 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:13:44.098529 kubelet[2130]: I1101 00:13:44.098492 2130 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:13:44.469163 kubelet[2130]: E1101 00:13:44.469051 2130 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:13:44.469381 kubelet[2130]: I1101 00:13:44.469249 2130 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:13:44.477879 kubelet[2130]: E1101 00:13:44.477802 2130 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:13:44.478066 kubelet[2130]: I1101 00:13:44.477901 2130 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 00:13:44.492373 kubelet[2130]: I1101 00:13:44.492289 2130 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 00:13:44.497812 kubelet[2130]: I1101 00:13:44.497613 2130 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:13:44.498044 kubelet[2130]: I1101 00:13:44.497722 2130 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:13:44.498242 kubelet[2130]: I1101 00:13:44.498050 2130 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:13:44.498242 kubelet[2130]: I1101 00:13:44.498067 2130 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 00:13:44.498386 kubelet[2130]: I1101 00:13:44.498339 2130 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 00:13:44.509388 kubelet[2130]: I1101 00:13:44.508124 2130 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:13:44.513564 kubelet[2130]: I1101 00:13:44.512489 2130 kubelet.go:475] "Attempting to sync node with API server" Nov 1 00:13:44.513564 kubelet[2130]: I1101 00:13:44.512545 2130 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:13:44.514270 kubelet[2130]: I1101 00:13:44.513933 2130 kubelet.go:387] "Adding apiserver pod source" Nov 1 00:13:44.514270 kubelet[2130]: I1101 00:13:44.514009 2130 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:13:44.515093 kubelet[2130]: E1101 00:13:44.515021 2130 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:13:44.515565 kubelet[2130]: E1101 00:13:44.515500 2130 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:13:44.533128 kubelet[2130]: I1101 00:13:44.533073 2130 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:13:44.533929 kubelet[2130]: I1101 00:13:44.533882 2130 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:13:44.533929 kubelet[2130]: I1101 00:13:44.533928 2130 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 00:13:44.534043 kubelet[2130]: W1101 00:13:44.534022 2130 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:13:44.538641 kubelet[2130]: I1101 00:13:44.538157 2130 server.go:1262] "Started kubelet" Nov 1 00:13:44.539598 kubelet[2130]: I1101 00:13:44.538964 2130 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:13:44.539598 kubelet[2130]: I1101 00:13:44.539028 2130 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 00:13:44.540093 kubelet[2130]: I1101 00:13:44.539778 2130 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:13:44.540093 kubelet[2130]: I1101 00:13:44.539911 2130 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:13:44.540589 kubelet[2130]: I1101 00:13:44.540557 2130 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:13:44.551220 kubelet[2130]: I1101 00:13:44.548308 2130 server.go:310] "Adding debug handlers to kubelet server" Nov 1 00:13:44.551605 kubelet[2130]: I1101 00:13:44.551580 2130 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 00:13:44.552061 kubelet[2130]: I1101 00:13:44.552006 2130 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:13:44.552211 kubelet[2130]: E1101 00:13:44.552188 2130 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:13:44.553005 kubelet[2130]: I1101 00:13:44.552737 2130 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:13:44.553005 kubelet[2130]: I1101 00:13:44.552855 2130 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:13:44.555126 kubelet[2130]: I1101 00:13:44.555070 2130 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:13:44.555544 kubelet[2130]: E1101 00:13:44.555255 2130 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:13:44.555544 kubelet[2130]: E1101 00:13:44.555406 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.19:6443: connect: connection refused" interval="200ms" Nov 1 00:13:44.557245 kubelet[2130]: E1101 00:13:44.557195 2130 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:13:44.557623 kubelet[2130]: I1101 00:13:44.557585 2130 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:13:44.557623 kubelet[2130]: I1101 00:13:44.557616 2130 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:13:44.576457 kubelet[2130]: I1101 00:13:44.576359 2130 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 00:13:44.578925 kubelet[2130]: I1101 00:13:44.578886 2130 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 1 00:13:44.579245 kubelet[2130]: I1101 00:13:44.578937 2130 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:13:44.579245 kubelet[2130]: I1101 00:13:44.578983 2130 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:13:44.579245 kubelet[2130]: E1101 00:13:44.579053 2130 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:13:44.580042 kubelet[2130]: E1101 00:13:44.580002 2130 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:13:44.589674 kubelet[2130]: I1101 00:13:44.589620 2130 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:13:44.589674 kubelet[2130]: I1101 00:13:44.589655 2130 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:13:44.589888 kubelet[2130]: I1101 00:13:44.589717 2130 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:13:44.653344 kubelet[2130]: E1101 00:13:44.653237 2130 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:13:44.670185 kubelet[2130]: I1101 00:13:44.670089 2130 policy_none.go:49] "None policy: Start" Nov 1 00:13:44.670185 kubelet[2130]: I1101 00:13:44.670162 2130 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:13:44.670185 kubelet[2130]: I1101 00:13:44.670188 2130 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 00:13:44.679617 kubelet[2130]: E1101 00:13:44.679550 2130 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:13:44.704419 kubelet[2130]: I1101 00:13:44.703664 2130 policy_none.go:47] "Start" Nov 1 00:13:44.722454 kubelet[2130]: E1101 00:13:44.617999 2130 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873b9ab4873bc85 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:13:44.538086533 +0000 UTC 
m=+0.816193764,LastTimestamp:2025-11-01 00:13:44.538086533 +0000 UTC m=+0.816193764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 00:13:44.754244 kubelet[2130]: E1101 00:13:44.754045 2130 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:13:44.756229 kubelet[2130]: E1101 00:13:44.756172 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="400ms" Nov 1 00:13:44.756403 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 00:13:44.797022 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 1 00:13:44.806945 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 1 00:13:44.818863 kubelet[2130]: E1101 00:13:44.818796 2130 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:13:44.819634 kubelet[2130]: I1101 00:13:44.819170 2130 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:13:44.819634 kubelet[2130]: I1101 00:13:44.819190 2130 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:13:44.819831 kubelet[2130]: I1101 00:13:44.819659 2130 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:13:44.823508 kubelet[2130]: E1101 00:13:44.823438 2130 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:13:44.823508 kubelet[2130]: E1101 00:13:44.823506 2130 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 1 00:13:44.900073 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Nov 1 00:13:44.913783 kubelet[2130]: E1101 00:13:44.913375 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:13:44.919420 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Nov 1 00:13:44.922806 kubelet[2130]: I1101 00:13:44.922769 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:13:44.923232 kubelet[2130]: E1101 00:13:44.923195 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Nov 1 00:13:44.924151 kubelet[2130]: E1101 00:13:44.924124 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:13:44.927046 systemd[1]: Created slice kubepods-burstable-pod17a77e258b255505273dfe50bd9b7bf0.slice - libcontainer container kubepods-burstable-pod17a77e258b255505273dfe50bd9b7bf0.slice. 
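Every failure above shares one root cause: nothing is listening on 10.0.0.19:6443 yet, because the kubelet itself must first launch the static kube-apiserver pod whose cgroup slices it is creating here. The lease controller's logged retry intervals (200ms, then 400ms) suggest exponential backoff; a small sketch of the reachability probe those "connection refused" errors imply:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the kubelet's reflector errors above.
	const apiServer = "10.0.0.19:6443"

	// Exponential backoff matching the intervals the lease controller
	// logs: 200ms, 400ms, 800ms, ...
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", apiServer, time.Second)
		if err == nil {
			conn.Close()
			fmt.Printf("attempt %d: %s is accepting connections\n", attempt, apiServer)
			return
		}
		fmt.Printf("attempt %d: %v (retrying in %v)\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	fmt.Println("giving up; kube-apiserver static pod not up yet")
}
```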
Nov 1 00:13:44.929617 kubelet[2130]: E1101 00:13:44.929556 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:13:44.956354 kubelet[2130]: I1101 00:13:44.954943 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:13:44.956354 kubelet[2130]: I1101 00:13:44.955335 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:13:44.956354 kubelet[2130]: I1101 00:13:44.956329 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:13:44.956354 kubelet[2130]: I1101 00:13:44.956362 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:13:44.956354 kubelet[2130]: I1101 00:13:44.956385 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/17a77e258b255505273dfe50bd9b7bf0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"17a77e258b255505273dfe50bd9b7bf0\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:13:44.957126 kubelet[2130]: I1101 00:13:44.956405 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:13:44.957126 kubelet[2130]: I1101 00:13:44.956427 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:13:44.957126 kubelet[2130]: I1101 00:13:44.956445 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/17a77e258b255505273dfe50bd9b7bf0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"17a77e258b255505273dfe50bd9b7bf0\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:13:44.957126 kubelet[2130]: I1101 00:13:44.956472 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/17a77e258b255505273dfe50bd9b7bf0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"17a77e258b255505273dfe50bd9b7bf0\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:13:45.139677 kubelet[2130]: I1101 00:13:45.139618 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:13:45.143055 kubelet[2130]: E1101 00:13:45.142945 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Nov 1 00:13:45.157204 kubelet[2130]: E1101 00:13:45.157089 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="800ms" Nov 1 00:13:45.318723 kubelet[2130]: E1101 00:13:45.318394 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:45.321431 containerd[1460]: time="2025-11-01T00:13:45.321277298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Nov 1 00:13:45.331264 kubelet[2130]: E1101 00:13:45.330215 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:45.331424 containerd[1460]: time="2025-11-01T00:13:45.331011446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Nov 1 00:13:45.336823 kubelet[2130]: E1101 00:13:45.336733 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:45.339199 containerd[1460]: time="2025-11-01T00:13:45.339093272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:17a77e258b255505273dfe50bd9b7bf0,Namespace:kube-system,Attempt:0,}" Nov 1 00:13:45.552727 kubelet[2130]: I1101 00:13:45.552517 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:13:45.553115 kubelet[2130]: E1101 00:13:45.553080 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Nov 1 00:13:45.620753 kernel: hrtimer: interrupt took 9466194 ns Nov 1 00:13:45.631256 kubelet[2130]: E1101 00:13:45.615510 2130 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:13:45.914727 kubelet[2130]: E1101 00:13:45.914631 2130 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:13:45.918966 kubelet[2130]: E1101 00:13:45.918882 2130 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:13:45.958370 kubelet[2130]: E1101 00:13:45.958273 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="1.6s" Nov 1 00:13:46.029292 kubelet[2130]: E1101 00:13:46.029206 2130 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:13:46.116652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1734486092.mount: Deactivated successfully. Nov 1 00:13:46.146258 containerd[1460]: time="2025-11-01T00:13:46.143250268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:13:46.151020 containerd[1460]: time="2025-11-01T00:13:46.150890445Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 1 00:13:46.158487 containerd[1460]: time="2025-11-01T00:13:46.158415330Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:13:46.169779 containerd[1460]: time="2025-11-01T00:13:46.169439164Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:13:46.174724 containerd[1460]: time="2025-11-01T00:13:46.174510857Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:13:46.178962 containerd[1460]: time="2025-11-01T00:13:46.176494680Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:13:46.178962 containerd[1460]: time="2025-11-01T00:13:46.177178591Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:13:46.188059 containerd[1460]: time="2025-11-01T00:13:46.187970480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:13:46.192022 containerd[1460]: time="2025-11-01T00:13:46.191896667Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo 
digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 870.476365ms" Nov 1 00:13:46.198924 containerd[1460]: time="2025-11-01T00:13:46.198835890Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 859.603912ms" Nov 1 00:13:46.202661 containerd[1460]: time="2025-11-01T00:13:46.202592732Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 871.490872ms" Nov 1 00:13:46.355127 kubelet[2130]: I1101 00:13:46.355085 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:13:46.357843 kubelet[2130]: E1101 00:13:46.357786 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Nov 1 00:13:46.501086 kubelet[2130]: E1101 00:13:46.500871 2130 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:13:46.828609 containerd[1460]: time="2025-11-01T00:13:46.826822292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:13:46.828609 containerd[1460]: time="2025-11-01T00:13:46.827074606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:13:46.828609 containerd[1460]: time="2025-11-01T00:13:46.827146694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:13:46.828609 containerd[1460]: time="2025-11-01T00:13:46.827662142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:13:46.854079 containerd[1460]: time="2025-11-01T00:13:46.847265704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:13:46.854079 containerd[1460]: time="2025-11-01T00:13:46.853532958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:13:46.854079 containerd[1460]: time="2025-11-01T00:13:46.853603083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:13:46.854079 containerd[1460]: time="2025-11-01T00:13:46.853840638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:13:46.887544 systemd[1]: Started cri-containerd-57fa7fc1affe352c679551ac6836b09c7120896f808d65dd92ff91be1c4362ab.scope - libcontainer container 57fa7fc1affe352c679551ac6836b09c7120896f808d65dd92ff91be1c4362ab. Nov 1 00:13:46.899214 containerd[1460]: time="2025-11-01T00:13:46.898768849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:13:46.899214 containerd[1460]: time="2025-11-01T00:13:46.898863561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:13:46.899214 containerd[1460]: time="2025-11-01T00:13:46.898878490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:13:46.899214 containerd[1460]: time="2025-11-01T00:13:46.899022266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:13:46.902315 systemd[1]: Started cri-containerd-dce7eda1afb5897f493b77a35205af1a10bfe7b778f15b204d82159c6f44caff.scope - libcontainer container dce7eda1afb5897f493b77a35205af1a10bfe7b778f15b204d82159c6f44caff. Nov 1 00:13:47.073035 systemd[1]: Started cri-containerd-f8f84a7bed1892b2ddfd656ec1906eba574ab368a98dda79859153e7dcf93af2.scope - libcontainer container f8f84a7bed1892b2ddfd656ec1906eba574ab368a98dda79859153e7dcf93af2. Nov 1 00:13:47.111167 containerd[1460]: time="2025-11-01T00:13:47.111003190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"57fa7fc1affe352c679551ac6836b09c7120896f808d65dd92ff91be1c4362ab\"" Nov 1 00:13:47.113135 kubelet[2130]: E1101 00:13:47.113099 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:47.119987 containerd[1460]: time="2025-11-01T00:13:47.119379928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:17a77e258b255505273dfe50bd9b7bf0,Namespace:kube-system,Attempt:0,} returns sandbox id \"dce7eda1afb5897f493b77a35205af1a10bfe7b778f15b204d82159c6f44caff\"" Nov 1 00:13:47.120733 kubelet[2130]: E1101 00:13:47.120648 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:47.140749 containerd[1460]: time="2025-11-01T00:13:47.140515611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8f84a7bed1892b2ddfd656ec1906eba574ab368a98dda79859153e7dcf93af2\"" Nov 1 00:13:47.141722 kubelet[2130]: E1101 00:13:47.141625 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:47.156704 containerd[1460]: time="2025-11-01T00:13:47.156609824Z" level=info msg="CreateContainer within sandbox \"57fa7fc1affe352c679551ac6836b09c7120896f808d65dd92ff91be1c4362ab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:13:47.198317 containerd[1460]: 
time="2025-11-01T00:13:47.198236306Z" level=info msg="CreateContainer within sandbox \"dce7eda1afb5897f493b77a35205af1a10bfe7b778f15b204d82159c6f44caff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:13:47.225900 containerd[1460]: time="2025-11-01T00:13:47.225830565Z" level=info msg="CreateContainer within sandbox \"f8f84a7bed1892b2ddfd656ec1906eba574ab368a98dda79859153e7dcf93af2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:13:47.463454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1799614114.mount: Deactivated successfully. Nov 1 00:13:47.480110 kubelet[2130]: E1101 00:13:47.480038 2130 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:13:47.485003 containerd[1460]: time="2025-11-01T00:13:47.484911079Z" level=info msg="CreateContainer within sandbox \"57fa7fc1affe352c679551ac6836b09c7120896f808d65dd92ff91be1c4362ab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"42d6a6d7a251f59be9c392d731a3ac7f17a8bcc10e5bfac7a11baf0d4dd2f3ba\"" Nov 1 00:13:47.485909 containerd[1460]: time="2025-11-01T00:13:47.485872158Z" level=info msg="StartContainer for \"42d6a6d7a251f59be9c392d731a3ac7f17a8bcc10e5bfac7a11baf0d4dd2f3ba\"" Nov 1 00:13:47.493552 containerd[1460]: time="2025-11-01T00:13:47.493472048Z" level=info msg="CreateContainer within sandbox \"dce7eda1afb5897f493b77a35205af1a10bfe7b778f15b204d82159c6f44caff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"95f45f0d7ecc26e29c197dc30c0c317e9c04f941757ade847848649387e863ce\"" Nov 1 00:13:47.495111 containerd[1460]: time="2025-11-01T00:13:47.494128134Z" level=info msg="StartContainer for \"95f45f0d7ecc26e29c197dc30c0c317e9c04f941757ade847848649387e863ce\"" Nov 1 00:13:47.495739 containerd[1460]: time="2025-11-01T00:13:47.495712177Z" level=info msg="CreateContainer within sandbox \"f8f84a7bed1892b2ddfd656ec1906eba574ab368a98dda79859153e7dcf93af2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c664cd321be2c1aa4e8672008337904cb4239c7cbd1b2b94fb1a4c92d72b13ca\"" Nov 1 00:13:47.496310 containerd[1460]: time="2025-11-01T00:13:47.496282448Z" level=info msg="StartContainer for \"c664cd321be2c1aa4e8672008337904cb4239c7cbd1b2b94fb1a4c92d72b13ca\"" Nov 1 00:13:47.526891 systemd[1]: Started cri-containerd-42d6a6d7a251f59be9c392d731a3ac7f17a8bcc10e5bfac7a11baf0d4dd2f3ba.scope - libcontainer container 42d6a6d7a251f59be9c392d731a3ac7f17a8bcc10e5bfac7a11baf0d4dd2f3ba. Nov 1 00:13:47.559703 kubelet[2130]: E1101 00:13:47.559583 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="3.2s" Nov 1 00:13:47.584895 systemd[1]: Started cri-containerd-95f45f0d7ecc26e29c197dc30c0c317e9c04f941757ade847848649387e863ce.scope - libcontainer container 95f45f0d7ecc26e29c197dc30c0c317e9c04f941757ade847848649387e863ce. 
Nov 1 00:13:47.587362 systemd[1]: Started cri-containerd-c664cd321be2c1aa4e8672008337904cb4239c7cbd1b2b94fb1a4c92d72b13ca.scope - libcontainer container c664cd321be2c1aa4e8672008337904cb4239c7cbd1b2b94fb1a4c92d72b13ca. Nov 1 00:13:47.674306 containerd[1460]: time="2025-11-01T00:13:47.674233372Z" level=info msg="StartContainer for \"95f45f0d7ecc26e29c197dc30c0c317e9c04f941757ade847848649387e863ce\" returns successfully" Nov 1 00:13:47.674882 containerd[1460]: time="2025-11-01T00:13:47.674437803Z" level=info msg="StartContainer for \"42d6a6d7a251f59be9c392d731a3ac7f17a8bcc10e5bfac7a11baf0d4dd2f3ba\" returns successfully" Nov 1 00:13:47.674882 containerd[1460]: time="2025-11-01T00:13:47.674473100Z" level=info msg="StartContainer for \"c664cd321be2c1aa4e8672008337904cb4239c7cbd1b2b94fb1a4c92d72b13ca\" returns successfully" Nov 1 00:13:47.959938 kubelet[2130]: I1101 00:13:47.959891 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:13:48.663009 kubelet[2130]: E1101 00:13:48.662971 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:13:48.663527 kubelet[2130]: E1101 00:13:48.663140 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:48.669742 kubelet[2130]: E1101 00:13:48.669711 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:13:48.669875 kubelet[2130]: E1101 00:13:48.669834 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:48.670626 kubelet[2130]: E1101 00:13:48.670602 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:13:48.670755 kubelet[2130]: E1101 00:13:48.670733 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:49.672944 kubelet[2130]: E1101 00:13:49.672572 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:13:49.672944 kubelet[2130]: E1101 00:13:49.672811 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:49.673728 kubelet[2130]: E1101 00:13:49.673018 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:13:49.673728 kubelet[2130]: E1101 00:13:49.673268 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:49.674375 kubelet[2130]: E1101 00:13:49.673748 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:13:49.674375 kubelet[2130]: E1101 00:13:49.674000 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:50.677440 kubelet[2130]: E1101 00:13:50.677362 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:13:50.678258 kubelet[2130]: E1101 00:13:50.677602 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:50.679611 kubelet[2130]: E1101 00:13:50.678993 2130 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:13:50.679611 kubelet[2130]: E1101 00:13:50.679214 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:50.923474 kubelet[2130]: E1101 00:13:50.923226 2130 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 1 00:13:51.350314 kubelet[2130]: I1101 00:13:51.350242 2130 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:13:51.353216 kubelet[2130]: I1101 00:13:51.353168 2130 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:13:51.362581 kubelet[2130]: E1101 00:13:51.362536 2130 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:13:51.362581 kubelet[2130]: I1101 00:13:51.362568 2130 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:13:51.364462 kubelet[2130]: E1101 00:13:51.364413 2130 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:13:51.364521 kubelet[2130]: I1101 00:13:51.364488 2130 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:13:51.366461 kubelet[2130]: E1101 00:13:51.366429 2130 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 00:13:51.534675 kubelet[2130]: I1101 00:13:51.533467 2130 apiserver.go:52] "Watching apiserver" Nov 1 00:13:51.554340 kubelet[2130]: I1101 00:13:51.553732 2130 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 00:13:52.803875 kubelet[2130]: I1101 00:13:52.803800 2130 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:13:52.821918 kubelet[2130]: E1101 00:13:52.821748 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:53.495009 kubelet[2130]: I1101 00:13:53.494949 2130 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:13:53.654555 kubelet[2130]: E1101 00:13:53.654316 
2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:53.681659 kubelet[2130]: E1101 00:13:53.681606 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:53.681837 kubelet[2130]: E1101 00:13:53.681624 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:54.705639 kubelet[2130]: I1101 00:13:54.705554 2130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.7055275399999998 podStartE2EDuration="1.70552754s" podCreationTimestamp="2025-11-01 00:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:13:54.683794712 +0000 UTC m=+10.961901943" watchObservedRunningTime="2025-11-01 00:13:54.70552754 +0000 UTC m=+10.983634791" Nov 1 00:13:54.777890 systemd[1]: Reloading requested from client PID 2418 ('systemctl') (unit session-7.scope)... Nov 1 00:13:54.777926 systemd[1]: Reloading... Nov 1 00:13:55.044752 zram_generator::config[2464]: No configuration found. Nov 1 00:13:55.172956 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:13:55.281160 systemd[1]: Reloading finished in 502 ms. Nov 1 00:13:55.330808 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:13:55.353412 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:13:55.353795 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:13:55.353880 systemd[1]: kubelet.service: Consumed 1.583s CPU time, 133.4M memory peak, 0B memory swap peak. Nov 1 00:13:55.363079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:13:55.554014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:13:55.560890 (kubelet)[2506]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:13:55.623909 kubelet[2506]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:13:55.623909 kubelet[2506]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
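The earlier "Failed creating a mirror pod … no PriorityClass with name system-node-critical was found" errors are a startup race: static control-plane pods reference the built-in system-node-critical class, which the API server only seeds once it is serving, so the first mirror-pod attempts can lose that race and succeed on a later retry (as they do after the restart below). A client-go sketch that checks for the class and shows its well-known shape; the kubeconfig path is an assumption, and you would not normally create this object yourself:

    package main

    import (
        "context"
        "fmt"

        schedulingv1 "k8s.io/api/scheduling/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()

        pc, err := cs.SchedulingV1().PriorityClasses().Get(ctx, "system-node-critical", metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            // The API server normally seeds this class itself shortly after startup;
            // this literal only illustrates the object's shape.
            pc = &schedulingv1.PriorityClass{
                ObjectMeta:  metav1.ObjectMeta{Name: "system-node-critical"},
                Value:       2000001000, // the well-known value of this built-in class
                Description: "Used for system critical pods that must not be moved from their current node.",
            }
            fmt.Printf("not yet seeded; built-in shape: %+v\n", pc)
            return
        }
        if err != nil {
            panic(err)
        }
        fmt.Println("found", pc.Name, "value", pc.Value)
    }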
Nov 1 00:13:55.623909 kubelet[2506]: I1101 00:13:55.622921 2506 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:13:55.632277 kubelet[2506]: I1101 00:13:55.631918 2506 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 00:13:55.632491 kubelet[2506]: I1101 00:13:55.632477 2506 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:13:55.632603 kubelet[2506]: I1101 00:13:55.632589 2506 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 00:13:55.632760 kubelet[2506]: I1101 00:13:55.632667 2506 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:13:55.633395 kubelet[2506]: I1101 00:13:55.633373 2506 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:13:55.635771 kubelet[2506]: I1101 00:13:55.635738 2506 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 00:13:55.638371 kubelet[2506]: I1101 00:13:55.638340 2506 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:13:55.643642 kubelet[2506]: E1101 00:13:55.643522 2506 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:13:55.643642 kubelet[2506]: I1101 00:13:55.643616 2506 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 00:13:55.649853 kubelet[2506]: I1101 00:13:55.649774 2506 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 00:13:55.650285 kubelet[2506]: I1101 00:13:55.650223 2506 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:13:55.650444 kubelet[2506]: I1101 00:13:55.650273 2506 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:13:55.650444 kubelet[2506]: I1101 00:13:55.650439 2506 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:13:55.650591 kubelet[2506]: I1101 00:13:55.650448 2506 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 00:13:55.650591 kubelet[2506]: I1101 00:13:55.650475 2506 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 00:13:55.651444 kubelet[2506]: I1101 00:13:55.651419 2506 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:13:55.651722 kubelet[2506]: I1101 00:13:55.651703 2506 kubelet.go:475] "Attempting to sync node with API server" Nov 1 00:13:55.651789 kubelet[2506]: I1101 00:13:55.651732 2506 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:13:55.651826 kubelet[2506]: I1101 00:13:55.651767 2506 kubelet.go:387] "Adding apiserver pod source" Nov 1 00:13:55.652959 kubelet[2506]: I1101 00:13:55.651852 2506 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:13:55.656921 kubelet[2506]: I1101 00:13:55.656874 2506 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:13:55.657805 kubelet[2506]: I1101 00:13:55.657765 2506 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:13:55.659347 kubelet[2506]: I1101 00:13:55.657821 2506 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 00:13:55.668565 kubelet[2506]: I1101 
00:13:55.667194 2506 server.go:1262] "Started kubelet" Nov 1 00:13:55.668565 kubelet[2506]: I1101 00:13:55.668245 2506 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:13:55.669063 kubelet[2506]: I1101 00:13:55.669038 2506 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:13:55.670646 kubelet[2506]: I1101 00:13:55.670585 2506 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:13:55.671548 kubelet[2506]: I1101 00:13:55.670387 2506 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:13:55.673425 kubelet[2506]: I1101 00:13:55.672676 2506 server.go:310] "Adding debug handlers to kubelet server" Nov 1 00:13:55.673425 kubelet[2506]: I1101 00:13:55.671529 2506 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 00:13:55.673809 kubelet[2506]: I1101 00:13:55.673782 2506 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:13:55.675781 kubelet[2506]: I1101 00:13:55.674887 2506 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 00:13:55.677046 kubelet[2506]: I1101 00:13:55.676422 2506 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:13:55.677046 kubelet[2506]: I1101 00:13:55.676769 2506 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:13:55.677046 kubelet[2506]: I1101 00:13:55.676828 2506 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:13:55.677046 kubelet[2506]: I1101 00:13:55.676897 2506 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:13:55.678559 kubelet[2506]: I1101 00:13:55.678539 2506 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:13:55.680367 kubelet[2506]: E1101 00:13:55.680331 2506 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:13:55.693542 kubelet[2506]: I1101 00:13:55.693394 2506 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 00:13:55.695073 kubelet[2506]: I1101 00:13:55.695045 2506 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:13:55.695073 kubelet[2506]: I1101 00:13:55.695073 2506 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:13:55.695185 kubelet[2506]: I1101 00:13:55.695106 2506 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:13:55.695185 kubelet[2506]: E1101 00:13:55.695167 2506 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:13:55.728846 kubelet[2506]: I1101 00:13:55.728809 2506 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:13:55.729047 kubelet[2506]: I1101 00:13:55.729031 2506 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:13:55.729111 kubelet[2506]: I1101 00:13:55.729102 2506 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:13:55.729299 kubelet[2506]: I1101 00:13:55.729284 2506 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:13:55.729367 kubelet[2506]: I1101 00:13:55.729344 2506 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:13:55.729422 kubelet[2506]: I1101 00:13:55.729413 2506 policy_none.go:49] "None policy: Start" Nov 1 00:13:55.729475 kubelet[2506]: I1101 00:13:55.729466 2506 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:13:55.729549 kubelet[2506]: I1101 00:13:55.729536 2506 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 00:13:55.729807 kubelet[2506]: I1101 00:13:55.729790 2506 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 1 00:13:55.729878 kubelet[2506]: I1101 00:13:55.729869 2506 policy_none.go:47] "Start" Nov 1 00:13:55.735702 kubelet[2506]: E1101 00:13:55.735470 2506 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:13:55.736436 kubelet[2506]: I1101 00:13:55.736419 2506 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:13:55.736579 kubelet[2506]: I1101 00:13:55.736533 2506 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:13:55.737148 kubelet[2506]: I1101 00:13:55.737041 2506 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:13:55.740398 kubelet[2506]: E1101 00:13:55.740368 2506 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:13:55.796880 kubelet[2506]: I1101 00:13:55.796810 2506 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:13:55.797080 kubelet[2506]: I1101 00:13:55.796925 2506 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:13:55.797134 kubelet[2506]: I1101 00:13:55.796813 2506 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:13:55.808658 kubelet[2506]: E1101 00:13:55.808152 2506 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:13:55.809940 kubelet[2506]: E1101 00:13:55.809835 2506 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 1 00:13:55.849851 kubelet[2506]: I1101 00:13:55.849761 2506 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:13:55.861780 kubelet[2506]: I1101 00:13:55.861735 2506 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 1 00:13:55.862080 kubelet[2506]: I1101 00:13:55.861876 2506 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:13:55.878453 kubelet[2506]: I1101 00:13:55.878391 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/17a77e258b255505273dfe50bd9b7bf0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"17a77e258b255505273dfe50bd9b7bf0\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:13:55.878453 kubelet[2506]: I1101 00:13:55.878443 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/17a77e258b255505273dfe50bd9b7bf0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"17a77e258b255505273dfe50bd9b7bf0\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:13:55.878667 kubelet[2506]: I1101 00:13:55.878482 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/17a77e258b255505273dfe50bd9b7bf0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"17a77e258b255505273dfe50bd9b7bf0\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:13:55.878667 kubelet[2506]: I1101 00:13:55.878516 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:13:55.878667 kubelet[2506]: I1101 00:13:55.878553 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:13:55.878667 kubelet[2506]: I1101 00:13:55.878579 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:13:55.878667 kubelet[2506]: I1101 00:13:55.878598 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:13:55.878967 kubelet[2506]: I1101 00:13:55.878615 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:13:55.878967 kubelet[2506]: I1101 00:13:55.878632 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:13:56.109348 kubelet[2506]: E1101 00:13:56.109033 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:56.109810 kubelet[2506]: E1101 00:13:56.109702 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:56.110977 kubelet[2506]: E1101 00:13:56.110741 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:56.406927 update_engine[1443]: I20251101 00:13:56.406802 1443 update_attempter.cc:509] Updating boot flags... 
Nov 1 00:13:56.653505 kubelet[2506]: I1101 00:13:56.653425 2506 apiserver.go:52] "Watching apiserver" Nov 1 00:13:56.676913 kubelet[2506]: I1101 00:13:56.676646 2506 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 00:13:56.714982 kubelet[2506]: I1101 00:13:56.710134 2506 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:13:56.714982 kubelet[2506]: I1101 00:13:56.710645 2506 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:13:56.714982 kubelet[2506]: I1101 00:13:56.712500 2506 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:13:56.726929 kubelet[2506]: E1101 00:13:56.726881 2506 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:13:56.727190 kubelet[2506]: E1101 00:13:56.727126 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:56.730361 kubelet[2506]: E1101 00:13:56.730306 2506 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 1 00:13:56.730802 kubelet[2506]: E1101 00:13:56.730716 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:56.738215 kubelet[2506]: E1101 00:13:56.738130 2506 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:13:56.738879 kubelet[2506]: E1101 00:13:56.738473 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:56.759726 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2560) Nov 1 00:13:56.763279 kubelet[2506]: I1101 00:13:56.762374 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.762351401 podStartE2EDuration="1.762351401s" podCreationTimestamp="2025-11-01 00:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:13:56.761872422 +0000 UTC m=+1.189773995" watchObservedRunningTime="2025-11-01 00:13:56.762351401 +0000 UTC m=+1.190252954" Nov 1 00:13:56.830742 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2561) Nov 1 00:13:57.713300 kubelet[2506]: E1101 00:13:57.713227 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:57.713952 kubelet[2506]: E1101 00:13:57.713631 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:57.714637 kubelet[2506]: E1101 00:13:57.714398 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:58.715038 kubelet[2506]: E1101 00:13:58.714993 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:58.715727 kubelet[2506]: E1101 00:13:58.715143 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:59.716943 kubelet[2506]: E1101 00:13:59.716900 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:59.717898 kubelet[2506]: E1101 00:13:59.717108 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:13:59.736850 kubelet[2506]: I1101 00:13:59.736775 2506 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:13:59.737212 containerd[1460]: time="2025-11-01T00:13:59.737173732Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:13:59.737738 kubelet[2506]: I1101 00:13:59.737402 2506 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:14:00.718969 kubelet[2506]: E1101 00:14:00.718900 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:00.975085 systemd[1]: Created slice kubepods-besteffort-pod792fc0d8_c05f_4946_ad9b_64fe1c63cc97.slice - libcontainer container kubepods-besteffort-pod792fc0d8_c05f_4946_ad9b_64fe1c63cc97.slice. Nov 1 00:14:01.009135 systemd[1]: Created slice kubepods-besteffort-pod330cee2b_1003_496a_b939_76e4f4f890e6.slice - libcontainer container kubepods-besteffort-pod330cee2b_1003_496a_b939_76e4f4f890e6.slice. 
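"Updating runtime config through cri with podcidr" above is the kubelet reacting to the controller-manager writing spec.podCIDR onto the Node object: it reads the CIDR and pushes it to the container runtime over CRI, after which containerd still waits for a CNI config (hence "No cni config template is specified, wait for other system components to drop the config."). A sketch of both halves, assuming a kubeconfig path and the default containerd socket:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()

        // 1. Read the CIDR the controller-manager put on the Node object.
        node, err := cs.CoreV1().Nodes().Get(ctx, "localhost", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("assigned podCIDR:", node.Spec.PodCIDR) // "192.168.0.0/24" in the log

        // 2. Hand it to the runtime over CRI, as the kubelet does.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        _, err = rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
            RuntimeConfig: &runtimeapi.RuntimeConfig{
                NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: node.Spec.PodCIDR},
            },
        })
        if err != nil {
            panic(err)
        }
    }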
Nov 1 00:14:01.116003 kubelet[2506]: I1101 00:14:01.115925 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/792fc0d8-c05f-4946-ad9b-64fe1c63cc97-kube-proxy\") pod \"kube-proxy-rwz9b\" (UID: \"792fc0d8-c05f-4946-ad9b-64fe1c63cc97\") " pod="kube-system/kube-proxy-rwz9b" Nov 1 00:14:01.116003 kubelet[2506]: I1101 00:14:01.115985 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/792fc0d8-c05f-4946-ad9b-64fe1c63cc97-xtables-lock\") pod \"kube-proxy-rwz9b\" (UID: \"792fc0d8-c05f-4946-ad9b-64fe1c63cc97\") " pod="kube-system/kube-proxy-rwz9b" Nov 1 00:14:01.116003 kubelet[2506]: I1101 00:14:01.116004 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/792fc0d8-c05f-4946-ad9b-64fe1c63cc97-lib-modules\") pod \"kube-proxy-rwz9b\" (UID: \"792fc0d8-c05f-4946-ad9b-64fe1c63cc97\") " pod="kube-system/kube-proxy-rwz9b" Nov 1 00:14:01.116003 kubelet[2506]: I1101 00:14:01.116022 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/330cee2b-1003-496a-b939-76e4f4f890e6-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-mgtxg\" (UID: \"330cee2b-1003-496a-b939-76e4f4f890e6\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-mgtxg" Nov 1 00:14:01.116330 kubelet[2506]: I1101 00:14:01.116046 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx2jt\" (UniqueName: \"kubernetes.io/projected/792fc0d8-c05f-4946-ad9b-64fe1c63cc97-kube-api-access-qx2jt\") pod \"kube-proxy-rwz9b\" (UID: \"792fc0d8-c05f-4946-ad9b-64fe1c63cc97\") " pod="kube-system/kube-proxy-rwz9b" Nov 1 00:14:01.116330 kubelet[2506]: I1101 00:14:01.116063 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cld9q\" (UniqueName: \"kubernetes.io/projected/330cee2b-1003-496a-b939-76e4f4f890e6-kube-api-access-cld9q\") pod \"tigera-operator-65cdcdfd6d-mgtxg\" (UID: \"330cee2b-1003-496a-b939-76e4f4f890e6\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-mgtxg" Nov 1 00:14:01.293983 kubelet[2506]: E1101 00:14:01.293856 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:01.294831 containerd[1460]: time="2025-11-01T00:14:01.294779036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rwz9b,Uid:792fc0d8-c05f-4946-ad9b-64fe1c63cc97,Namespace:kube-system,Attempt:0,}" Nov 1 00:14:01.324901 containerd[1460]: time="2025-11-01T00:14:01.324736462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:14:01.324901 containerd[1460]: time="2025-11-01T00:14:01.324838074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:14:01.324901 containerd[1460]: time="2025-11-01T00:14:01.324858432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:01.325173 containerd[1460]: time="2025-11-01T00:14:01.325020059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:01.353852 systemd[1]: Started cri-containerd-ea737e23e3ead5d666a393fdab912da6c7ff49a57e53e730a9c5a7cb3af6c8dc.scope - libcontainer container ea737e23e3ead5d666a393fdab912da6c7ff49a57e53e730a9c5a7cb3af6c8dc. Nov 1 00:14:01.384423 containerd[1460]: time="2025-11-01T00:14:01.384370272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rwz9b,Uid:792fc0d8-c05f-4946-ad9b-64fe1c63cc97,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea737e23e3ead5d666a393fdab912da6c7ff49a57e53e730a9c5a7cb3af6c8dc\"" Nov 1 00:14:01.385621 kubelet[2506]: E1101 00:14:01.385586 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:01.473003 containerd[1460]: time="2025-11-01T00:14:01.472948031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-mgtxg,Uid:330cee2b-1003-496a-b939-76e4f4f890e6,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:14:01.603142 containerd[1460]: time="2025-11-01T00:14:01.602986019Z" level=info msg="CreateContainer within sandbox \"ea737e23e3ead5d666a393fdab912da6c7ff49a57e53e730a9c5a7cb3af6c8dc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:14:01.836480 containerd[1460]: time="2025-11-01T00:14:01.836293341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:14:01.836480 containerd[1460]: time="2025-11-01T00:14:01.836398611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:14:01.836480 containerd[1460]: time="2025-11-01T00:14:01.836412607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:01.836794 containerd[1460]: time="2025-11-01T00:14:01.836528045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:01.860984 systemd[1]: Started cri-containerd-87ae48397c7d784c2b46c1909eb29e1f7fdd8ab0aa0fd038e8c97856a84fa7c9.scope - libcontainer container 87ae48397c7d784c2b46c1909eb29e1f7fdd8ab0aa0fd038e8c97856a84fa7c9. 
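Unlike the static pods, kube-proxy-rwz9b and tigera-operator-65cdcdfd6d-mgtxg each carry an auto-generated kube-api-access-* volume (qx2jt and cld9q in the reconciler lines above): a projected volume combining a bound service-account token, the kube-root-ca.crt ConfigMap, and the pod namespace via the downward API. A sketch of its shape; the expiry and item layout mirror what the kubelet typically injects and are assumptions here:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        expiry := int64(3607) // typical bound-token lifetime the kubelet requests (assumed)
        vol := corev1.Volume{
            Name: "kube-api-access-qx2jt",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        // Bound, auto-rotated service-account token.
                        {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            Path:              "token",
                            ExpirationSeconds: &expiry,
                        }},
                        // Cluster CA bundle published in every namespace.
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
                            Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
                        }},
                        // Pod namespace via the downward API.
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "namespace",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
                            }},
                        }},
                    },
                },
            },
        }
        fmt.Println(vol.Name, "projects", len(vol.Projected.Sources), "sources")
    }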
Nov 1 00:14:01.893198 containerd[1460]: time="2025-11-01T00:14:01.893113039Z" level=info msg="CreateContainer within sandbox \"ea737e23e3ead5d666a393fdab912da6c7ff49a57e53e730a9c5a7cb3af6c8dc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bebbbccf5dd5c938f9511bbe7e3f8995fdf5a4782505be1231e2a3ea82e266d0\"" Nov 1 00:14:01.894312 containerd[1460]: time="2025-11-01T00:14:01.894158136Z" level=info msg="StartContainer for \"bebbbccf5dd5c938f9511bbe7e3f8995fdf5a4782505be1231e2a3ea82e266d0\"" Nov 1 00:14:01.915900 containerd[1460]: time="2025-11-01T00:14:01.915838507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-mgtxg,Uid:330cee2b-1003-496a-b939-76e4f4f890e6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"87ae48397c7d784c2b46c1909eb29e1f7fdd8ab0aa0fd038e8c97856a84fa7c9\"" Nov 1 00:14:01.919635 containerd[1460]: time="2025-11-01T00:14:01.919285727Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:14:01.939861 systemd[1]: Started cri-containerd-bebbbccf5dd5c938f9511bbe7e3f8995fdf5a4782505be1231e2a3ea82e266d0.scope - libcontainer container bebbbccf5dd5c938f9511bbe7e3f8995fdf5a4782505be1231e2a3ea82e266d0. Nov 1 00:14:02.048455 containerd[1460]: time="2025-11-01T00:14:02.048373311Z" level=info msg="StartContainer for \"bebbbccf5dd5c938f9511bbe7e3f8995fdf5a4782505be1231e2a3ea82e266d0\" returns successfully" Nov 1 00:14:02.725175 kubelet[2506]: E1101 00:14:02.725125 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:02.737432 kubelet[2506]: I1101 00:14:02.737337 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rwz9b" podStartSLOduration=2.737313426 podStartE2EDuration="2.737313426s" podCreationTimestamp="2025-11-01 00:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:14:02.737039228 +0000 UTC m=+7.164940791" watchObservedRunningTime="2025-11-01 00:14:02.737313426 +0000 UTC m=+7.165214979" Nov 1 00:14:03.516145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount479747234.mount: Deactivated successfully. 
Nov 1 00:14:03.560974 kubelet[2506]: E1101 00:14:03.560925 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:03.729107 kubelet[2506]: E1101 00:14:03.729047 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:03.729107 kubelet[2506]: E1101 00:14:03.729056 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:04.124477 containerd[1460]: time="2025-11-01T00:14:04.124409715Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:14:04.125459 containerd[1460]: time="2025-11-01T00:14:04.125414463Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 1 00:14:04.126728 containerd[1460]: time="2025-11-01T00:14:04.126679051Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:14:04.128948 containerd[1460]: time="2025-11-01T00:14:04.128911076Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:14:04.129543 containerd[1460]: time="2025-11-01T00:14:04.129514766Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.210177952s" Nov 1 00:14:04.129600 containerd[1460]: time="2025-11-01T00:14:04.129545535Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:14:04.137815 containerd[1460]: time="2025-11-01T00:14:04.137775095Z" level=info msg="CreateContainer within sandbox \"87ae48397c7d784c2b46c1909eb29e1f7fdd8ab0aa0fd038e8c97856a84fa7c9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:14:04.152270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount751241046.mount: Deactivated successfully. Nov 1 00:14:04.153591 containerd[1460]: time="2025-11-01T00:14:04.153544905Z" level=info msg="CreateContainer within sandbox \"87ae48397c7d784c2b46c1909eb29e1f7fdd8ab0aa0fd038e8c97856a84fa7c9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f2ec354b680e448855df6a682c967ea0ac4607572ed2514eb5899c6cf9469cc5\"" Nov 1 00:14:04.154139 containerd[1460]: time="2025-11-01T00:14:04.154101045Z" level=info msg="StartContainer for \"f2ec354b680e448855df6a682c967ea0ac4607572ed2514eb5899c6cf9469cc5\"" Nov 1 00:14:04.190842 systemd[1]: Started cri-containerd-f2ec354b680e448855df6a682c967ea0ac4607572ed2514eb5899c6cf9469cc5.scope - libcontainer container f2ec354b680e448855df6a682c967ea0ac4607572ed2514eb5899c6cf9469cc5. 
Nov 1 00:14:04.222033 containerd[1460]: time="2025-11-01T00:14:04.221985720Z" level=info msg="StartContainer for \"f2ec354b680e448855df6a682c967ea0ac4607572ed2514eb5899c6cf9469cc5\" returns successfully" Nov 1 00:14:04.748589 kubelet[2506]: I1101 00:14:04.748505 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-mgtxg" podStartSLOduration=2.535277174 podStartE2EDuration="4.748428052s" podCreationTimestamp="2025-11-01 00:14:00 +0000 UTC" firstStartedPulling="2025-11-01 00:14:01.918637662 +0000 UTC m=+6.346539215" lastFinishedPulling="2025-11-01 00:14:04.13178854 +0000 UTC m=+8.559690093" observedRunningTime="2025-11-01 00:14:04.748143064 +0000 UTC m=+9.176044627" watchObservedRunningTime="2025-11-01 00:14:04.748428052 +0000 UTC m=+9.176329615" Nov 1 00:14:10.402353 sudo[1636]: pam_unix(sudo:session): session closed for user root Nov 1 00:14:10.405719 sshd[1633]: pam_unix(sshd:session): session closed for user core Nov 1 00:14:10.412339 systemd[1]: sshd@6-10.0.0.19:22-10.0.0.1:41262.service: Deactivated successfully. Nov 1 00:14:10.417829 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:14:10.418335 systemd[1]: session-7.scope: Consumed 8.698s CPU time, 163.0M memory peak, 0B memory swap peak. Nov 1 00:14:10.419370 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:14:10.421442 systemd-logind[1438]: Removed session 7. Nov 1 00:14:15.687758 systemd[1]: Created slice kubepods-besteffort-pod56f49b1e_07f7_47b6_a216_78943ad5e19b.slice - libcontainer container kubepods-besteffort-pod56f49b1e_07f7_47b6_a216_78943ad5e19b.slice. Nov 1 00:14:15.720298 kubelet[2506]: I1101 00:14:15.720221 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56f49b1e-07f7-47b6-a216-78943ad5e19b-tigera-ca-bundle\") pod \"calico-typha-68fbdd7895-hcvbr\" (UID: \"56f49b1e-07f7-47b6-a216-78943ad5e19b\") " pod="calico-system/calico-typha-68fbdd7895-hcvbr" Nov 1 00:14:15.720298 kubelet[2506]: I1101 00:14:15.720277 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/56f49b1e-07f7-47b6-a216-78943ad5e19b-typha-certs\") pod \"calico-typha-68fbdd7895-hcvbr\" (UID: \"56f49b1e-07f7-47b6-a216-78943ad5e19b\") " pod="calico-system/calico-typha-68fbdd7895-hcvbr" Nov 1 00:14:15.720298 kubelet[2506]: I1101 00:14:15.720305 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ggs5\" (UniqueName: \"kubernetes.io/projected/56f49b1e-07f7-47b6-a216-78943ad5e19b-kube-api-access-7ggs5\") pod \"calico-typha-68fbdd7895-hcvbr\" (UID: \"56f49b1e-07f7-47b6-a216-78943ad5e19b\") " pod="calico-system/calico-typha-68fbdd7895-hcvbr" Nov 1 00:14:15.863876 systemd[1]: Created slice kubepods-besteffort-podbcb40c51_e2ca_45be_b66f_4ce60549b183.slice - libcontainer container kubepods-besteffort-podbcb40c51_e2ca_45be_b66f_4ce60549b183.slice. 
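Note: the pod_startup_latency_tracker entries above are consistent arithmetic: podStartE2EDuration runs from podCreationTimestamp (00:14:00) to observedRunningTime (00:14:04.748), and podStartSLOduration excludes the image-pull window (lastFinishedPulling minus firstStartedPulling, about 2.213s), which is why 4.748s collapses to 2.535s for tigera-operator while kube-proxy (no pull, zero-valued pull timestamps) reports identical E2E and SLO values. A worked check of those numbers, as an illustration only:

    # Illustrative check of the tigera-operator startup-latency entry above.
    e2e  = 4.748428052                # podStartE2EDuration, seconds
    pull = 4.13178854 - 1.918637662   # lastFinishedPulling - firstStartedPulling
    slo  = e2e - pull                 # pull time is excluded from the SLO figure
    print(f"pull={pull:.9f}s slo={slo:.9f}s")  # slo == 2.535277174s, as logged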
Nov 1 00:14:15.921088 kubelet[2506]: I1101 00:14:15.921017 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bcb40c51-e2ca-45be-b66f-4ce60549b183-var-run-calico\") pod \"calico-node-rrkfj\" (UID: \"bcb40c51-e2ca-45be-b66f-4ce60549b183\") " pod="calico-system/calico-node-rrkfj" Nov 1 00:14:15.921088 kubelet[2506]: I1101 00:14:15.921071 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bcb40c51-e2ca-45be-b66f-4ce60549b183-cni-bin-dir\") pod \"calico-node-rrkfj\" (UID: \"bcb40c51-e2ca-45be-b66f-4ce60549b183\") " pod="calico-system/calico-node-rrkfj" Nov 1 00:14:15.921088 kubelet[2506]: I1101 00:14:15.921087 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcb40c51-e2ca-45be-b66f-4ce60549b183-lib-modules\") pod \"calico-node-rrkfj\" (UID: \"bcb40c51-e2ca-45be-b66f-4ce60549b183\") " pod="calico-system/calico-node-rrkfj" Nov 1 00:14:15.921088 kubelet[2506]: I1101 00:14:15.921100 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bcb40c51-e2ca-45be-b66f-4ce60549b183-var-lib-calico\") pod \"calico-node-rrkfj\" (UID: \"bcb40c51-e2ca-45be-b66f-4ce60549b183\") " pod="calico-system/calico-node-rrkfj" Nov 1 00:14:15.921391 kubelet[2506]: I1101 00:14:15.921122 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bcb40c51-e2ca-45be-b66f-4ce60549b183-cni-log-dir\") pod \"calico-node-rrkfj\" (UID: \"bcb40c51-e2ca-45be-b66f-4ce60549b183\") " pod="calico-system/calico-node-rrkfj" Nov 1 00:14:15.921391 kubelet[2506]: I1101 00:14:15.921140 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bcb40c51-e2ca-45be-b66f-4ce60549b183-cni-net-dir\") pod \"calico-node-rrkfj\" (UID: \"bcb40c51-e2ca-45be-b66f-4ce60549b183\") " pod="calico-system/calico-node-rrkfj" Nov 1 00:14:15.921391 kubelet[2506]: I1101 00:14:15.921225 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bcb40c51-e2ca-45be-b66f-4ce60549b183-flexvol-driver-host\") pod \"calico-node-rrkfj\" (UID: \"bcb40c51-e2ca-45be-b66f-4ce60549b183\") " pod="calico-system/calico-node-rrkfj" Nov 1 00:14:15.921391 kubelet[2506]: I1101 00:14:15.921301 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqrdt\" (UniqueName: \"kubernetes.io/projected/bcb40c51-e2ca-45be-b66f-4ce60549b183-kube-api-access-bqrdt\") pod \"calico-node-rrkfj\" (UID: \"bcb40c51-e2ca-45be-b66f-4ce60549b183\") " pod="calico-system/calico-node-rrkfj" Nov 1 00:14:15.921391 kubelet[2506]: I1101 00:14:15.921338 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcb40c51-e2ca-45be-b66f-4ce60549b183-xtables-lock\") pod \"calico-node-rrkfj\" (UID: \"bcb40c51-e2ca-45be-b66f-4ce60549b183\") " pod="calico-system/calico-node-rrkfj" Nov 1 00:14:15.921525 kubelet[2506]: I1101 00:14:15.921378 2506 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bcb40c51-e2ca-45be-b66f-4ce60549b183-policysync\") pod \"calico-node-rrkfj\" (UID: \"bcb40c51-e2ca-45be-b66f-4ce60549b183\") " pod="calico-system/calico-node-rrkfj" Nov 1 00:14:15.921525 kubelet[2506]: I1101 00:14:15.921404 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcb40c51-e2ca-45be-b66f-4ce60549b183-tigera-ca-bundle\") pod \"calico-node-rrkfj\" (UID: \"bcb40c51-e2ca-45be-b66f-4ce60549b183\") " pod="calico-system/calico-node-rrkfj" Nov 1 00:14:15.921525 kubelet[2506]: I1101 00:14:15.921437 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bcb40c51-e2ca-45be-b66f-4ce60549b183-node-certs\") pod \"calico-node-rrkfj\" (UID: \"bcb40c51-e2ca-45be-b66f-4ce60549b183\") " pod="calico-system/calico-node-rrkfj" Nov 1 00:14:15.995886 kubelet[2506]: E1101 00:14:15.995750 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:15.996847 containerd[1460]: time="2025-11-01T00:14:15.996724814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68fbdd7895-hcvbr,Uid:56f49b1e-07f7-47b6-a216-78943ad5e19b,Namespace:calico-system,Attempt:0,}" Nov 1 00:14:16.025405 kubelet[2506]: E1101 00:14:16.025333 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.025405 kubelet[2506]: W1101 00:14:16.025590 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.025955 kubelet[2506]: E1101 00:14:16.025636 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.035122 kubelet[2506]: E1101 00:14:16.035071 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.035247 kubelet[2506]: W1101 00:14:16.035112 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.035297 kubelet[2506]: E1101 00:14:16.035269 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.048096 kubelet[2506]: E1101 00:14:16.048050 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.048096 kubelet[2506]: W1101 00:14:16.048082 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.048311 kubelet[2506]: E1101 00:14:16.048113 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:14:16.048747 containerd[1460]: time="2025-11-01T00:14:16.048572041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:14:16.048747 containerd[1460]: time="2025-11-01T00:14:16.048676427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:14:16.048747 containerd[1460]: time="2025-11-01T00:14:16.048710031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:14:16.048936 containerd[1460]: time="2025-11-01T00:14:16.048858610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:14:16.070588 kubelet[2506]: E1101 00:14:16.070512 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ln244" podUID="d5658fcc-61ca-4e96-9f79-25e33876cacb"
Nov 1 00:14:16.090192 systemd[1]: Started cri-containerd-3a30792c59961b0b6249b149e08882747b24f61dfc7318532ce3d00db8970aaf.scope - libcontainer container 3a30792c59961b0b6249b149e08882747b24f61dfc7318532ce3d00db8970aaf.
[Identical FlexVolume probe failures (driver-call.go:262, driver-call.go:149, plugins.go:697) repeated between 00:14:16.122 and 00:14:16.154 omitted; the volume reconciler entries interleaved with them are kept below.]
Nov 1 00:14:16.138810 kubelet[2506]: I1101 00:14:16.138652 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d5658fcc-61ca-4e96-9f79-25e33876cacb-registration-dir\") pod \"csi-node-driver-ln244\" (UID: \"d5658fcc-61ca-4e96-9f79-25e33876cacb\") " pod="calico-system/csi-node-driver-ln244"
Nov 1 00:14:16.142437 kubelet[2506]: I1101 00:14:16.142394 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d5658fcc-61ca-4e96-9f79-25e33876cacb-socket-dir\") pod \"csi-node-driver-ln244\" (UID: \"d5658fcc-61ca-4e96-9f79-25e33876cacb\") " pod="calico-system/csi-node-driver-ln244"
Nov 1 00:14:16.144616 kubelet[2506]: I1101 00:14:16.144444 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5658fcc-61ca-4e96-9f79-25e33876cacb-kubelet-dir\") pod \"csi-node-driver-ln244\" (UID: \"d5658fcc-61ca-4e96-9f79-25e33876cacb\") " pod="calico-system/csi-node-driver-ln244"
Nov 1 00:14:16.151130 kubelet[2506]: I1101 00:14:16.151093 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qsmx\" (UniqueName: \"kubernetes.io/projected/d5658fcc-61ca-4e96-9f79-25e33876cacb-kube-api-access-4qsmx\") pod \"csi-node-driver-ln244\" (UID: \"d5658fcc-61ca-4e96-9f79-25e33876cacb\") " pod="calico-system/csi-node-driver-ln244"
Nov 1 00:14:16.153229 kubelet[2506]: I1101 00:14:16.153051 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d5658fcc-61ca-4e96-9f79-25e33876cacb-varrun\") pod \"csi-node-driver-ln244\" (UID: \"d5658fcc-61ca-4e96-9f79-25e33876cacb\") " pod="calico-system/csi-node-driver-ln244"
Nov 1 00:14:16.162862 containerd[1460]: time="2025-11-01T00:14:16.162793765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68fbdd7895-hcvbr,Uid:56f49b1e-07f7-47b6-a216-78943ad5e19b,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a30792c59961b0b6249b149e08882747b24f61dfc7318532ce3d00db8970aaf\""
Nov 1 00:14:16.164915 kubelet[2506]: E1101 00:14:16.164106 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:14:16.166836 containerd[1460]: time="2025-11-01T00:14:16.165856176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 1 00:14:16.177537 kubelet[2506]: E1101 00:14:16.177474 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:14:16.178380 containerd[1460]: time="2025-11-01T00:14:16.178311539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rrkfj,Uid:bcb40c51-e2ca-45be-b66f-4ce60549b183,Namespace:calico-system,Attempt:0,}"
Nov 1 00:14:16.215897 containerd[1460]: time="2025-11-01T00:14:16.215743563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:14:16.215897 containerd[1460]: time="2025-11-01T00:14:16.215826449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:14:16.215897 containerd[1460]: time="2025-11-01T00:14:16.215846287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:14:16.216133 containerd[1460]: time="2025-11-01T00:14:16.215958728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:14:16.240060 systemd[1]: Started cri-containerd-4816a61361446a195560714c2a023af5c2ba328e93f10813de8536641e46235a.scope - libcontainer container 4816a61361446a195560714c2a023af5c2ba328e93f10813de8536641e46235a.
Nov 1 00:14:16.254594 kubelet[2506]: E1101 00:14:16.254332 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.254594 kubelet[2506]: W1101 00:14:16.254364 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.254594 kubelet[2506]: E1101 00:14:16.254398 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.256380 kubelet[2506]: E1101 00:14:16.256207 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.256380 kubelet[2506]: W1101 00:14:16.256245 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.256380 kubelet[2506]: E1101 00:14:16.256284 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.256751 kubelet[2506]: E1101 00:14:16.256680 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.256825 kubelet[2506]: W1101 00:14:16.256748 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.256825 kubelet[2506]: E1101 00:14:16.256808 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.257289 kubelet[2506]: E1101 00:14:16.257233 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.257289 kubelet[2506]: W1101 00:14:16.257247 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.257289 kubelet[2506]: E1101 00:14:16.257258 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.257549 kubelet[2506]: E1101 00:14:16.257502 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.257549 kubelet[2506]: W1101 00:14:16.257517 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.257549 kubelet[2506]: E1101 00:14:16.257527 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:14:16.258582 kubelet[2506]: E1101 00:14:16.258300 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.258582 kubelet[2506]: W1101 00:14:16.258314 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.258582 kubelet[2506]: E1101 00:14:16.258325 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.258939 kubelet[2506]: E1101 00:14:16.258897 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.258939 kubelet[2506]: W1101 00:14:16.258909 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.258939 kubelet[2506]: E1101 00:14:16.258925 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.259468 kubelet[2506]: E1101 00:14:16.259367 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.259468 kubelet[2506]: W1101 00:14:16.259393 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.259468 kubelet[2506]: E1101 00:14:16.259403 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.259950 kubelet[2506]: E1101 00:14:16.259848 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.259950 kubelet[2506]: W1101 00:14:16.259860 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.259950 kubelet[2506]: E1101 00:14:16.259870 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.260429 kubelet[2506]: E1101 00:14:16.260322 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.260429 kubelet[2506]: W1101 00:14:16.260335 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.260429 kubelet[2506]: E1101 00:14:16.260346 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:14:16.261398 kubelet[2506]: E1101 00:14:16.261278 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.261398 kubelet[2506]: W1101 00:14:16.261321 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.261398 kubelet[2506]: E1101 00:14:16.261351 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.262947 kubelet[2506]: E1101 00:14:16.262259 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.262947 kubelet[2506]: W1101 00:14:16.262322 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.262947 kubelet[2506]: E1101 00:14:16.262334 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.263454 kubelet[2506]: E1101 00:14:16.263412 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.263454 kubelet[2506]: W1101 00:14:16.263425 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.263454 kubelet[2506]: E1101 00:14:16.263438 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.264086 kubelet[2506]: E1101 00:14:16.263990 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.264086 kubelet[2506]: W1101 00:14:16.264005 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.264086 kubelet[2506]: E1101 00:14:16.264018 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.265795 kubelet[2506]: E1101 00:14:16.265736 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.265795 kubelet[2506]: W1101 00:14:16.265749 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.265795 kubelet[2506]: E1101 00:14:16.265762 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:14:16.266281 kubelet[2506]: E1101 00:14:16.266267 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.266375 kubelet[2506]: W1101 00:14:16.266362 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.266444 kubelet[2506]: E1101 00:14:16.266419 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.270783 kubelet[2506]: E1101 00:14:16.270087 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.270783 kubelet[2506]: W1101 00:14:16.270134 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.270783 kubelet[2506]: E1101 00:14:16.270180 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.270783 kubelet[2506]: E1101 00:14:16.270616 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.270783 kubelet[2506]: W1101 00:14:16.270639 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.270783 kubelet[2506]: E1101 00:14:16.270660 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.272429 kubelet[2506]: E1101 00:14:16.272188 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.272429 kubelet[2506]: W1101 00:14:16.272223 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.272429 kubelet[2506]: E1101 00:14:16.272249 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.277563 kubelet[2506]: E1101 00:14:16.277508 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.277563 kubelet[2506]: W1101 00:14:16.277547 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.277743 kubelet[2506]: E1101 00:14:16.277584 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:14:16.278117 kubelet[2506]: E1101 00:14:16.278091 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.278222 kubelet[2506]: W1101 00:14:16.278176 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.278310 kubelet[2506]: E1101 00:14:16.278294 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.278639 kubelet[2506]: E1101 00:14:16.278625 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.278725 kubelet[2506]: W1101 00:14:16.278711 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.278817 kubelet[2506]: E1101 00:14:16.278799 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.279933 containerd[1460]: time="2025-11-01T00:14:16.279097866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rrkfj,Uid:bcb40c51-e2ca-45be-b66f-4ce60549b183,Namespace:calico-system,Attempt:0,} returns sandbox id \"4816a61361446a195560714c2a023af5c2ba328e93f10813de8536641e46235a\"" Nov 1 00:14:16.280617 kubelet[2506]: E1101 00:14:16.280299 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.280617 kubelet[2506]: W1101 00:14:16.280325 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.280617 kubelet[2506]: E1101 00:14:16.280340 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.281571 kubelet[2506]: E1101 00:14:16.281534 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.281612 kubelet[2506]: W1101 00:14:16.281571 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.281612 kubelet[2506]: E1101 00:14:16.281599 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:14:16.282679 kubelet[2506]: E1101 00:14:16.282655 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.282679 kubelet[2506]: W1101 00:14:16.282679 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.282794 kubelet[2506]: E1101 00:14:16.282727 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:16.284125 kubelet[2506]: E1101 00:14:16.284084 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:16.287799 kubelet[2506]: E1101 00:14:16.287779 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:14:16.287986 kubelet[2506]: W1101 00:14:16.287913 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:14:16.287986 kubelet[2506]: E1101 00:14:16.287946 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:14:17.699422 kubelet[2506]: E1101 00:14:17.699304 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ln244" podUID="d5658fcc-61ca-4e96-9f79-25e33876cacb" Nov 1 00:14:17.964365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1068827809.mount: Deactivated successfully. 
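
The driver-call.go/plugins.go failures above are the kubelet probing its FlexVolume plugin directory: the expected executable /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, so the call produces empty output and unmarshalling "" as JSON fails. A minimal sketch of the JSON contract a FlexVolume driver's init call must satisfy (an illustration only, not the real nodeagent~uds driver; the struct is a hypothetical subset of the FlexVolume result shape):

    // flexvol_init_sketch.go - illustrates the FlexVolume "init" contract
    // the kubelet's driver-call.go expects: the driver executable must
    // exist and print a JSON status to stdout. Empty stdout is exactly
    // what produces "unexpected end of JSON input" in the records above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus is a hypothetical subset of the result object the
    // kubelet parses from a FlexVolume driver call.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            // e.g. {"status":"Success","capabilities":{"attach":false}}
            fmt.Println(string(out))
            return
        }
        // Any call this sketch does not implement.
        out, _ := json.Marshal(driverStatus{Status: "Not supported"})
        fmt.Println(string(out))
        os.Exit(1)
    }
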
Nov 1 00:14:18.945492 containerd[1460]: time="2025-11-01T00:14:18.945419755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:14:18.946962 containerd[1460]: time="2025-11-01T00:14:18.946902123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 1 00:14:18.948471 containerd[1460]: time="2025-11-01T00:14:18.948417373Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:14:18.951586 containerd[1460]: time="2025-11-01T00:14:18.951537792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:14:18.952437 containerd[1460]: time="2025-11-01T00:14:18.952369666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.785532624s"
Nov 1 00:14:18.952437 containerd[1460]: time="2025-11-01T00:14:18.952427775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 1 00:14:18.953808 containerd[1460]: time="2025-11-01T00:14:18.953736196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 1 00:14:18.969757 containerd[1460]: time="2025-11-01T00:14:18.968246665Z" level=info msg="CreateContainer within sandbox \"3a30792c59961b0b6249b149e08882747b24f61dfc7318532ce3d00db8970aaf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 1 00:14:18.993592 containerd[1460]: time="2025-11-01T00:14:18.993532859Z" level=info msg="CreateContainer within sandbox \"3a30792c59961b0b6249b149e08882747b24f61dfc7318532ce3d00db8970aaf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0738f4cac351eaef7865b00b79f40f38c86040ecf343f0d9e94c51bf212075e4\""
Nov 1 00:14:18.994054 containerd[1460]: time="2025-11-01T00:14:18.994026097Z" level=info msg="StartContainer for \"0738f4cac351eaef7865b00b79f40f38c86040ecf343f0d9e94c51bf212075e4\""
Nov 1 00:14:19.040018 systemd[1]: Started cri-containerd-0738f4cac351eaef7865b00b79f40f38c86040ecf343f0d9e94c51bf212075e4.scope - libcontainer container 0738f4cac351eaef7865b00b79f40f38c86040ecf343f0d9e94c51bf212075e4.
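
For scale, the typha pull above reports bytes read=35234628 over the reported 2.785532624s, roughly 12.6 MB/s. A quick check, with both numbers copied from the records above:

    package main

    import "fmt"

    // Back-of-envelope pull throughput from the containerd records above.
    func main() {
        const bytesRead = 35234628.0 // from "stop pulling image ...typha:v3.30.4"
        const seconds = 2.785532624  // from "Pulled image ... in 2.785532624s"
        fmt.Printf("%.1f MB/s\n", bytesRead/seconds/1e6) // ~12.6 MB/s
    }
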
Nov 1 00:14:19.096678 containerd[1460]: time="2025-11-01T00:14:19.096618456Z" level=info msg="StartContainer for \"0738f4cac351eaef7865b00b79f40f38c86040ecf343f0d9e94c51bf212075e4\" returns successfully"
Nov 1 00:14:19.695937 kubelet[2506]: E1101 00:14:19.695869 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ln244" podUID="d5658fcc-61ca-4e96-9f79-25e33876cacb"
Nov 1 00:14:19.776797 kubelet[2506]: E1101 00:14:19.776754 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:14:19.863946 kubelet[2506]: E1101 00:14:19.863893 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:14:19.863946 kubelet[2506]: W1101 00:14:19.863920 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:14:19.863946 kubelet[2506]: E1101 00:14:19.863944 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:14:19.899400 kubelet[2506]: [previous 3 messages repeated 32 more times, 00:14:19.864 through 00:14:19.899; duplicates elided]
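
The recurring dns.go:154 records are the kubelet noting that the node's resolver configuration lists more nameservers than it will propagate to a pod (the classic resolv.conf limit is three), so it keeps 1.1.1.1, 1.0.0.1 and 8.8.8.8 and drops the rest. A sketch of that check with a hypothetical helper, not kubelet source:

    package main

    import (
        "fmt"
        "strings"
    )

    // maxResolvConfNameservers mirrors the classic glibc resolv.conf
    // limit the kubelet applies when building a pod's resolver config.
    const maxResolvConfNameservers = 3

    // applyNameserverLimit keeps the first N nameservers and reports
    // whether any were dropped - the condition behind the dns.go:154
    // "Nameserver limits exceeded" records above. (Hypothetical helper.)
    func applyNameserverLimit(servers []string) (kept []string, exceeded bool) {
        if len(servers) <= maxResolvConfNameservers {
            return servers, false
        }
        return servers[:maxResolvConfNameservers], true
    }

    func main() {
        // Example: a host resolv.conf listing four upstream resolvers.
        servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
        if kept, exceeded := applyNameserverLimit(servers); exceeded {
            fmt.Printf("Nameserver limits exceeded, applied line: %s\n",
                strings.Join(kept, " "))
        }
    }
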
Nov 1 00:14:20.778433 kubelet[2506]: I1101 00:14:20.778396 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 1 00:14:20.778921 kubelet[2506]: E1101 00:14:20.778803 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:14:20.816852 containerd[1460]: time="2025-11-01T00:14:20.816778438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:14:20.817712 containerd[1460]: time="2025-11-01T00:14:20.817630740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Nov 1 00:14:20.819403 containerd[1460]: time="2025-11-01T00:14:20.819363598Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:14:20.822194 containerd[1460]: time="2025-11-01T00:14:20.822160356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:14:20.822811 containerd[1460]: time="2025-11-01T00:14:20.822785391Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.868999391s"
Nov 1 00:14:20.822873 containerd[1460]: time="2025-11-01T00:14:20.822815367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 1 00:14:20.829312 containerd[1460]: time="2025-11-01T00:14:20.829262197Z" level=info msg="CreateContainer within sandbox \"4816a61361446a195560714c2a023af5c2ba328e93f10813de8536641e46235a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 1 00:14:20.847967 containerd[1460]: time="2025-11-01T00:14:20.847900207Z" level=info msg="CreateContainer within sandbox \"4816a61361446a195560714c2a023af5c2ba328e93f10813de8536641e46235a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fcfa473bee1216234d84f39e5cf6f691232fba65c022f65f820f40b895eedf49\""
Nov 1 00:14:20.848728 containerd[1460]: time="2025-11-01T00:14:20.848479135Z" level=info msg="StartContainer for \"fcfa473bee1216234d84f39e5cf6f691232fba65c022f65f820f40b895eedf49\""
Nov 1 00:14:20.875242 kubelet[2506]: E1101 00:14:20.874644 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:14:20.875242 kubelet[2506]: W1101 00:14:20.874699 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:14:20.875242 kubelet[2506]: E1101 00:14:20.874734 2506 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:14:20.886375 systemd[1]: Started cri-containerd-fcfa473bee1216234d84f39e5cf6f691232fba65c022f65f820f40b895eedf49.scope - libcontainer container fcfa473bee1216234d84f39e5cf6f691232fba65c022f65f820f40b895eedf49.
Nov 1 00:14:20.908086 kubelet[2506]: [previous 3 kubelet messages repeated 32 more times, 00:14:20.875 through 00:14:20.908; duplicates elided]
Nov 1 00:14:20.934160 containerd[1460]: time="2025-11-01T00:14:20.934087600Z" level=info msg="StartContainer for \"fcfa473bee1216234d84f39e5cf6f691232fba65c022f65f820f40b895eedf49\" returns successfully"
Nov 1 00:14:20.940437 systemd[1]: cri-containerd-fcfa473bee1216234d84f39e5cf6f691232fba65c022f65f820f40b895eedf49.scope: Deactivated successfully.
Nov 1 00:14:20.970325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcfa473bee1216234d84f39e5cf6f691232fba65c022f65f820f40b895eedf49-rootfs.mount: Deactivated successfully.
Nov 1 00:14:21.476314 containerd[1460]: time="2025-11-01T00:14:21.472828570Z" level=info msg="shim disconnected" id=fcfa473bee1216234d84f39e5cf6f691232fba65c022f65f820f40b895eedf49 namespace=k8s.io Nov 1 00:14:21.476314 containerd[1460]: time="2025-11-01T00:14:21.476306357Z" level=warning msg="cleaning up after shim disconnected" id=fcfa473bee1216234d84f39e5cf6f691232fba65c022f65f820f40b895eedf49 namespace=k8s.io Nov 1 00:14:21.476314 containerd[1460]: time="2025-11-01T00:14:21.476332226Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:14:21.696540 kubelet[2506]: E1101 00:14:21.696449 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ln244" podUID="d5658fcc-61ca-4e96-9f79-25e33876cacb" Nov 1 00:14:21.783587 kubelet[2506]: E1101 00:14:21.783284 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:21.784588 containerd[1460]: time="2025-11-01T00:14:21.784533667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:14:21.855166 kubelet[2506]: I1101 00:14:21.850025 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-68fbdd7895-hcvbr" podStartSLOduration=4.062022552 podStartE2EDuration="6.850005075s" podCreationTimestamp="2025-11-01 00:14:15 +0000 UTC" firstStartedPulling="2025-11-01 00:14:16.165439763 +0000 UTC m=+20.593341316" lastFinishedPulling="2025-11-01 00:14:18.953422276 +0000 UTC m=+23.381323839" observedRunningTime="2025-11-01 00:14:20.20048921 +0000 UTC m=+24.628390763" watchObservedRunningTime="2025-11-01 00:14:21.850005075 +0000 UTC m=+26.277906628" Nov 1 00:14:23.696502 kubelet[2506]: E1101 00:14:23.696432 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ln244" podUID="d5658fcc-61ca-4e96-9f79-25e33876cacb" Nov 1 00:14:25.696574 kubelet[2506]: E1101 00:14:25.696510 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ln244" podUID="d5658fcc-61ca-4e96-9f79-25e33876cacb" Nov 1 00:14:25.758702 containerd[1460]: time="2025-11-01T00:14:25.758641361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:14:25.759731 containerd[1460]: time="2025-11-01T00:14:25.759663001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 00:14:25.761271 containerd[1460]: time="2025-11-01T00:14:25.761232750Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:14:25.765007 containerd[1460]: time="2025-11-01T00:14:25.764951336Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:14:25.765665 containerd[1460]: time="2025-11-01T00:14:25.765618109Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.981032794s" Nov 1 00:14:25.765665 containerd[1460]: time="2025-11-01T00:14:25.765649257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:14:25.771606 containerd[1460]: time="2025-11-01T00:14:25.771551596Z" level=info msg="CreateContainer within sandbox \"4816a61361446a195560714c2a023af5c2ba328e93f10813de8536641e46235a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:14:25.791873 containerd[1460]: time="2025-11-01T00:14:25.791807110Z" level=info msg="CreateContainer within sandbox \"4816a61361446a195560714c2a023af5c2ba328e93f10813de8536641e46235a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3a6c9942d56aef48a79970103ec004c909f60f02266e04513a35aefdea547aa0\"" Nov 1 00:14:25.792397 containerd[1460]: time="2025-11-01T00:14:25.792351193Z" level=info msg="StartContainer for \"3a6c9942d56aef48a79970103ec004c909f60f02266e04513a35aefdea547aa0\"" Nov 1 00:14:25.829923 systemd[1]: Started cri-containerd-3a6c9942d56aef48a79970103ec004c909f60f02266e04513a35aefdea547aa0.scope - libcontainer container 3a6c9942d56aef48a79970103ec004c909f60f02266e04513a35aefdea547aa0. Nov 1 00:14:25.869037 containerd[1460]: time="2025-11-01T00:14:25.868981440Z" level=info msg="StartContainer for \"3a6c9942d56aef48a79970103ec004c909f60f02266e04513a35aefdea547aa0\" returns successfully" Nov 1 00:14:26.795086 kubelet[2506]: E1101 00:14:26.795035 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:26.850668 kubelet[2506]: I1101 00:14:26.850599 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:14:26.851196 kubelet[2506]: E1101 00:14:26.850993 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:27.135412 systemd[1]: cri-containerd-3a6c9942d56aef48a79970103ec004c909f60f02266e04513a35aefdea547aa0.scope: Deactivated successfully. Nov 1 00:14:27.160842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a6c9942d56aef48a79970103ec004c909f60f02266e04513a35aefdea547aa0-rootfs.mount: Deactivated successfully. 
Nov 1 00:14:27.221214 kubelet[2506]: I1101 00:14:27.221170 2506 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 1 00:14:27.522960 containerd[1460]: time="2025-11-01T00:14:27.522827087Z" level=info msg="shim disconnected" id=3a6c9942d56aef48a79970103ec004c909f60f02266e04513a35aefdea547aa0 namespace=k8s.io Nov 1 00:14:27.522960 containerd[1460]: time="2025-11-01T00:14:27.522908260Z" level=warning msg="cleaning up after shim disconnected" id=3a6c9942d56aef48a79970103ec004c909f60f02266e04513a35aefdea547aa0 namespace=k8s.io Nov 1 00:14:27.522960 containerd[1460]: time="2025-11-01T00:14:27.522931794Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:14:27.541045 systemd[1]: Created slice kubepods-burstable-pod6846208b_d846_430d_8df4_ccfb42c456d3.slice - libcontainer container kubepods-burstable-pod6846208b_d846_430d_8df4_ccfb42c456d3.slice. Nov 1 00:14:27.544754 systemd[1]: Created slice kubepods-besteffort-pod88e24afd_1301_4e45_96e8_67af65d033d0.slice - libcontainer container kubepods-besteffort-pod88e24afd_1301_4e45_96e8_67af65d033d0.slice. Nov 1 00:14:27.555633 kubelet[2506]: I1101 00:14:27.555431 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88e24afd-1301-4e45-96e8-67af65d033d0-whisker-ca-bundle\") pod \"whisker-5cdc56d7f5-vs2fv\" (UID: \"88e24afd-1301-4e45-96e8-67af65d033d0\") " pod="calico-system/whisker-5cdc56d7f5-vs2fv" Nov 1 00:14:27.555633 kubelet[2506]: I1101 00:14:27.555468 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klhb9\" (UniqueName: \"kubernetes.io/projected/88e24afd-1301-4e45-96e8-67af65d033d0-kube-api-access-klhb9\") pod \"whisker-5cdc56d7f5-vs2fv\" (UID: \"88e24afd-1301-4e45-96e8-67af65d033d0\") " pod="calico-system/whisker-5cdc56d7f5-vs2fv" Nov 1 00:14:27.555633 kubelet[2506]: I1101 00:14:27.555491 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/42d6452e-a1e5-4daf-80fd-e1f205f5b03a-calico-apiserver-certs\") pod \"calico-apiserver-b56b4988b-mnxsg\" (UID: \"42d6452e-a1e5-4daf-80fd-e1f205f5b03a\") " pod="calico-apiserver/calico-apiserver-b56b4988b-mnxsg" Nov 1 00:14:27.555633 kubelet[2506]: I1101 00:14:27.555511 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/88e24afd-1301-4e45-96e8-67af65d033d0-whisker-backend-key-pair\") pod \"whisker-5cdc56d7f5-vs2fv\" (UID: \"88e24afd-1301-4e45-96e8-67af65d033d0\") " pod="calico-system/whisker-5cdc56d7f5-vs2fv" Nov 1 00:14:27.555633 kubelet[2506]: I1101 00:14:27.555525 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lcxl\" (UniqueName: \"kubernetes.io/projected/6846208b-d846-430d-8df4-ccfb42c456d3-kube-api-access-8lcxl\") pod \"coredns-66bc5c9577-vw47h\" (UID: \"6846208b-d846-430d-8df4-ccfb42c456d3\") " pod="kube-system/coredns-66bc5c9577-vw47h" Nov 1 00:14:27.555925 kubelet[2506]: I1101 00:14:27.555546 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58qbc\" (UniqueName: \"kubernetes.io/projected/42d6452e-a1e5-4daf-80fd-e1f205f5b03a-kube-api-access-58qbc\") pod \"calico-apiserver-b56b4988b-mnxsg\" (UID: 
\"42d6452e-a1e5-4daf-80fd-e1f205f5b03a\") " pod="calico-apiserver/calico-apiserver-b56b4988b-mnxsg" Nov 1 00:14:27.555925 kubelet[2506]: I1101 00:14:27.555562 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6846208b-d846-430d-8df4-ccfb42c456d3-config-volume\") pod \"coredns-66bc5c9577-vw47h\" (UID: \"6846208b-d846-430d-8df4-ccfb42c456d3\") " pod="kube-system/coredns-66bc5c9577-vw47h" Nov 1 00:14:27.561078 systemd[1]: Created slice kubepods-besteffort-pod42d6452e_a1e5_4daf_80fd_e1f205f5b03a.slice - libcontainer container kubepods-besteffort-pod42d6452e_a1e5_4daf_80fd_e1f205f5b03a.slice. Nov 1 00:14:27.580336 systemd[1]: Created slice kubepods-besteffort-pod9265ab6d_1d0a_42f4_baa7_12e5c42cad61.slice - libcontainer container kubepods-besteffort-pod9265ab6d_1d0a_42f4_baa7_12e5c42cad61.slice. Nov 1 00:14:27.591283 systemd[1]: Created slice kubepods-burstable-pod53c7655e_7d0a_426b_a88e_be70b5c6070d.slice - libcontainer container kubepods-burstable-pod53c7655e_7d0a_426b_a88e_be70b5c6070d.slice. Nov 1 00:14:27.599966 systemd[1]: Created slice kubepods-besteffort-podf11d7d31_f676_4516_b063_ddcb43a2faf5.slice - libcontainer container kubepods-besteffort-podf11d7d31_f676_4516_b063_ddcb43a2faf5.slice. Nov 1 00:14:27.607575 systemd[1]: Created slice kubepods-besteffort-pode75afe96_48a0_4769_9bc6_591261c95345.slice - libcontainer container kubepods-besteffort-pode75afe96_48a0_4769_9bc6_591261c95345.slice. Nov 1 00:14:27.656229 kubelet[2506]: I1101 00:14:27.656152 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9265ab6d-1d0a-42f4-baa7-12e5c42cad61-tigera-ca-bundle\") pod \"calico-kube-controllers-558d5b9ff5-tdbvc\" (UID: \"9265ab6d-1d0a-42f4-baa7-12e5c42cad61\") " pod="calico-system/calico-kube-controllers-558d5b9ff5-tdbvc" Nov 1 00:14:27.656441 kubelet[2506]: I1101 00:14:27.656244 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbwql\" (UniqueName: \"kubernetes.io/projected/9265ab6d-1d0a-42f4-baa7-12e5c42cad61-kube-api-access-wbwql\") pod \"calico-kube-controllers-558d5b9ff5-tdbvc\" (UID: \"9265ab6d-1d0a-42f4-baa7-12e5c42cad61\") " pod="calico-system/calico-kube-controllers-558d5b9ff5-tdbvc" Nov 1 00:14:27.656441 kubelet[2506]: I1101 00:14:27.656291 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f11d7d31-f676-4516-b063-ddcb43a2faf5-calico-apiserver-certs\") pod \"calico-apiserver-b56b4988b-k8vh2\" (UID: \"f11d7d31-f676-4516-b063-ddcb43a2faf5\") " pod="calico-apiserver/calico-apiserver-b56b4988b-k8vh2" Nov 1 00:14:27.656441 kubelet[2506]: I1101 00:14:27.656340 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e75afe96-48a0-4769-9bc6-591261c95345-config\") pod \"goldmane-7c778bb748-xs644\" (UID: \"e75afe96-48a0-4769-9bc6-591261c95345\") " pod="calico-system/goldmane-7c778bb748-xs644" Nov 1 00:14:27.656441 kubelet[2506]: I1101 00:14:27.656375 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e75afe96-48a0-4769-9bc6-591261c95345-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-xs644\" (UID: 
\"e75afe96-48a0-4769-9bc6-591261c95345\") " pod="calico-system/goldmane-7c778bb748-xs644" Nov 1 00:14:27.656441 kubelet[2506]: I1101 00:14:27.656405 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e75afe96-48a0-4769-9bc6-591261c95345-goldmane-key-pair\") pod \"goldmane-7c778bb748-xs644\" (UID: \"e75afe96-48a0-4769-9bc6-591261c95345\") " pod="calico-system/goldmane-7c778bb748-xs644" Nov 1 00:14:27.656591 kubelet[2506]: I1101 00:14:27.656439 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcgn5\" (UniqueName: \"kubernetes.io/projected/e75afe96-48a0-4769-9bc6-591261c95345-kube-api-access-fcgn5\") pod \"goldmane-7c778bb748-xs644\" (UID: \"e75afe96-48a0-4769-9bc6-591261c95345\") " pod="calico-system/goldmane-7c778bb748-xs644" Nov 1 00:14:27.656591 kubelet[2506]: I1101 00:14:27.656539 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drhw7\" (UniqueName: \"kubernetes.io/projected/53c7655e-7d0a-426b-a88e-be70b5c6070d-kube-api-access-drhw7\") pod \"coredns-66bc5c9577-4vmf6\" (UID: \"53c7655e-7d0a-426b-a88e-be70b5c6070d\") " pod="kube-system/coredns-66bc5c9577-4vmf6" Nov 1 00:14:27.656648 kubelet[2506]: I1101 00:14:27.656588 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86f62\" (UniqueName: \"kubernetes.io/projected/f11d7d31-f676-4516-b063-ddcb43a2faf5-kube-api-access-86f62\") pod \"calico-apiserver-b56b4988b-k8vh2\" (UID: \"f11d7d31-f676-4516-b063-ddcb43a2faf5\") " pod="calico-apiserver/calico-apiserver-b56b4988b-k8vh2" Nov 1 00:14:27.656648 kubelet[2506]: I1101 00:14:27.656624 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53c7655e-7d0a-426b-a88e-be70b5c6070d-config-volume\") pod \"coredns-66bc5c9577-4vmf6\" (UID: \"53c7655e-7d0a-426b-a88e-be70b5c6070d\") " pod="kube-system/coredns-66bc5c9577-4vmf6" Nov 1 00:14:27.702612 systemd[1]: Created slice kubepods-besteffort-podd5658fcc_61ca_4e96_9f79_25e33876cacb.slice - libcontainer container kubepods-besteffort-podd5658fcc_61ca_4e96_9f79_25e33876cacb.slice. 
Nov 1 00:14:27.709819 containerd[1460]: time="2025-11-01T00:14:27.709661572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ln244,Uid:d5658fcc-61ca-4e96-9f79-25e33876cacb,Namespace:calico-system,Attempt:0,}" Nov 1 00:14:27.800610 kubelet[2506]: E1101 00:14:27.799436 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:27.800610 kubelet[2506]: E1101 00:14:27.799579 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:27.801955 containerd[1460]: time="2025-11-01T00:14:27.801910876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:14:27.835765 containerd[1460]: time="2025-11-01T00:14:27.835666910Z" level=error msg="Failed to destroy network for sandbox \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:27.836290 containerd[1460]: time="2025-11-01T00:14:27.836247169Z" level=error msg="encountered an error cleaning up failed sandbox \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:27.836352 containerd[1460]: time="2025-11-01T00:14:27.836322852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ln244,Uid:d5658fcc-61ca-4e96-9f79-25e33876cacb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:27.849313 kubelet[2506]: E1101 00:14:27.849256 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:27.849489 kubelet[2506]: E1101 00:14:27.849343 2506 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ln244" Nov 1 00:14:27.849489 kubelet[2506]: E1101 00:14:27.849372 2506 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ln244" Nov 1 00:14:27.849489 kubelet[2506]: E1101 00:14:27.849445 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ln244_calico-system(d5658fcc-61ca-4e96-9f79-25e33876cacb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ln244_calico-system(d5658fcc-61ca-4e96-9f79-25e33876cacb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ln244" podUID="d5658fcc-61ca-4e96-9f79-25e33876cacb" Nov 1 00:14:27.855341 containerd[1460]: time="2025-11-01T00:14:27.855282023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cdc56d7f5-vs2fv,Uid:88e24afd-1301-4e45-96e8-67af65d033d0,Namespace:calico-system,Attempt:0,}" Nov 1 00:14:27.857632 kubelet[2506]: E1101 00:14:27.857604 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:27.858906 containerd[1460]: time="2025-11-01T00:14:27.858818906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vw47h,Uid:6846208b-d846-430d-8df4-ccfb42c456d3,Namespace:kube-system,Attempt:0,}" Nov 1 00:14:27.875411 containerd[1460]: time="2025-11-01T00:14:27.875340549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b56b4988b-mnxsg,Uid:42d6452e-a1e5-4daf-80fd-e1f205f5b03a,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:14:27.898101 containerd[1460]: time="2025-11-01T00:14:27.897977960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-558d5b9ff5-tdbvc,Uid:9265ab6d-1d0a-42f4-baa7-12e5c42cad61,Namespace:calico-system,Attempt:0,}" Nov 1 00:14:27.900734 kubelet[2506]: E1101 00:14:27.899678 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:27.902266 containerd[1460]: time="2025-11-01T00:14:27.901922378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4vmf6,Uid:53c7655e-7d0a-426b-a88e-be70b5c6070d,Namespace:kube-system,Attempt:0,}" Nov 1 00:14:27.911074 containerd[1460]: time="2025-11-01T00:14:27.911015177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b56b4988b-k8vh2,Uid:f11d7d31-f676-4516-b063-ddcb43a2faf5,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:14:27.928039 containerd[1460]: time="2025-11-01T00:14:27.927952631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xs644,Uid:e75afe96-48a0-4769-9bc6-591261c95345,Namespace:calico-system,Attempt:0,}" Nov 1 00:14:27.954976 containerd[1460]: time="2025-11-01T00:14:27.954906940Z" level=error msg="Failed to destroy network for sandbox \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:27.955609 containerd[1460]: time="2025-11-01T00:14:27.955561209Z" 
level=error msg="encountered an error cleaning up failed sandbox \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:27.955706 containerd[1460]: time="2025-11-01T00:14:27.955646800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cdc56d7f5-vs2fv,Uid:88e24afd-1301-4e45-96e8-67af65d033d0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:27.956080 kubelet[2506]: E1101 00:14:27.956023 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:27.956203 kubelet[2506]: E1101 00:14:27.956107 2506 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cdc56d7f5-vs2fv" Nov 1 00:14:27.956203 kubelet[2506]: E1101 00:14:27.956136 2506 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cdc56d7f5-vs2fv" Nov 1 00:14:27.956344 kubelet[2506]: E1101 00:14:27.956210 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5cdc56d7f5-vs2fv_calico-system(88e24afd-1301-4e45-96e8-67af65d033d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5cdc56d7f5-vs2fv_calico-system(88e24afd-1301-4e45-96e8-67af65d033d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5cdc56d7f5-vs2fv" podUID="88e24afd-1301-4e45-96e8-67af65d033d0" Nov 1 00:14:27.968077 containerd[1460]: time="2025-11-01T00:14:27.968008549Z" level=error msg="Failed to destroy network for sandbox \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:27.968724 
containerd[1460]: time="2025-11-01T00:14:27.968642941Z" level=error msg="encountered an error cleaning up failed sandbox \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:27.968823 containerd[1460]: time="2025-11-01T00:14:27.968770801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vw47h,Uid:6846208b-d846-430d-8df4-ccfb42c456d3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:27.969153 kubelet[2506]: E1101 00:14:27.969078 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:27.969258 kubelet[2506]: E1101 00:14:27.969157 2506 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vw47h" Nov 1 00:14:27.969258 kubelet[2506]: E1101 00:14:27.969186 2506 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vw47h" Nov 1 00:14:27.969353 kubelet[2506]: E1101 00:14:27.969261 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-vw47h_kube-system(6846208b-d846-430d-8df4-ccfb42c456d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-vw47h_kube-system(6846208b-d846-430d-8df4-ccfb42c456d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-vw47h" podUID="6846208b-d846-430d-8df4-ccfb42c456d3" Nov 1 00:14:28.078947 containerd[1460]: time="2025-11-01T00:14:28.078559094Z" level=error msg="Failed to destroy network for sandbox \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 1 00:14:28.085754 containerd[1460]: time="2025-11-01T00:14:28.085666121Z" level=error msg="encountered an error cleaning up failed sandbox \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.087841 containerd[1460]: time="2025-11-01T00:14:28.086657734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b56b4988b-mnxsg,Uid:42d6452e-a1e5-4daf-80fd-e1f205f5b03a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.088047 kubelet[2506]: E1101 00:14:28.087548 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.088047 kubelet[2506]: E1101 00:14:28.087625 2506 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b56b4988b-mnxsg" Nov 1 00:14:28.088047 kubelet[2506]: E1101 00:14:28.087652 2506 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b56b4988b-mnxsg" Nov 1 00:14:28.088293 kubelet[2506]: E1101 00:14:28.087753 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b56b4988b-mnxsg_calico-apiserver(42d6452e-a1e5-4daf-80fd-e1f205f5b03a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b56b4988b-mnxsg_calico-apiserver(42d6452e-a1e5-4daf-80fd-e1f205f5b03a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b56b4988b-mnxsg" podUID="42d6452e-a1e5-4daf-80fd-e1f205f5b03a" Nov 1 00:14:28.117395 containerd[1460]: time="2025-11-01T00:14:28.117273124Z" level=error msg="Failed to destroy network for sandbox \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.118141 containerd[1460]: time="2025-11-01T00:14:28.118100328Z" level=error msg="encountered an error cleaning up failed sandbox \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.118577 containerd[1460]: time="2025-11-01T00:14:28.118179637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-558d5b9ff5-tdbvc,Uid:9265ab6d-1d0a-42f4-baa7-12e5c42cad61,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.118965 kubelet[2506]: E1101 00:14:28.118633 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.118965 kubelet[2506]: E1101 00:14:28.118742 2506 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-558d5b9ff5-tdbvc" Nov 1 00:14:28.118965 kubelet[2506]: E1101 00:14:28.118767 2506 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-558d5b9ff5-tdbvc" Nov 1 00:14:28.121230 kubelet[2506]: E1101 00:14:28.118852 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-558d5b9ff5-tdbvc_calico-system(9265ab6d-1d0a-42f4-baa7-12e5c42cad61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-558d5b9ff5-tdbvc_calico-system(9265ab6d-1d0a-42f4-baa7-12e5c42cad61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-558d5b9ff5-tdbvc" podUID="9265ab6d-1d0a-42f4-baa7-12e5c42cad61" Nov 1 00:14:28.121336 containerd[1460]: time="2025-11-01T00:14:28.119028721Z" level=error msg="Failed to 
destroy network for sandbox \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.122145 containerd[1460]: time="2025-11-01T00:14:28.121940219Z" level=error msg="encountered an error cleaning up failed sandbox \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.122145 containerd[1460]: time="2025-11-01T00:14:28.122047311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xs644,Uid:e75afe96-48a0-4769-9bc6-591261c95345,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.122441 kubelet[2506]: E1101 00:14:28.122381 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.122643 kubelet[2506]: E1101 00:14:28.122470 2506 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-xs644" Nov 1 00:14:28.122643 kubelet[2506]: E1101 00:14:28.122499 2506 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-xs644" Nov 1 00:14:28.122643 kubelet[2506]: E1101 00:14:28.122584 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-xs644_calico-system(e75afe96-48a0-4769-9bc6-591261c95345)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-xs644_calico-system(e75afe96-48a0-4769-9bc6-591261c95345)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-xs644" podUID="e75afe96-48a0-4769-9bc6-591261c95345" Nov 1 00:14:28.123056 containerd[1460]: 
time="2025-11-01T00:14:28.122875416Z" level=error msg="Failed to destroy network for sandbox \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.123295 containerd[1460]: time="2025-11-01T00:14:28.123254348Z" level=error msg="encountered an error cleaning up failed sandbox \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.123425 containerd[1460]: time="2025-11-01T00:14:28.123311354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b56b4988b-k8vh2,Uid:f11d7d31-f676-4516-b063-ddcb43a2faf5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.123530 kubelet[2506]: E1101 00:14:28.123467 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.123530 kubelet[2506]: E1101 00:14:28.123494 2506 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b56b4988b-k8vh2" Nov 1 00:14:28.123530 kubelet[2506]: E1101 00:14:28.123512 2506 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b56b4988b-k8vh2" Nov 1 00:14:28.123660 kubelet[2506]: E1101 00:14:28.123546 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b56b4988b-k8vh2_calico-apiserver(f11d7d31-f676-4516-b063-ddcb43a2faf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b56b4988b-k8vh2_calico-apiserver(f11d7d31-f676-4516-b063-ddcb43a2faf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-b56b4988b-k8vh2" podUID="f11d7d31-f676-4516-b063-ddcb43a2faf5" Nov 1 00:14:28.131285 containerd[1460]: time="2025-11-01T00:14:28.131216531Z" level=error msg="Failed to destroy network for sandbox \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.131876 containerd[1460]: time="2025-11-01T00:14:28.131840664Z" level=error msg="encountered an error cleaning up failed sandbox \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.131951 containerd[1460]: time="2025-11-01T00:14:28.131913010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4vmf6,Uid:53c7655e-7d0a-426b-a88e-be70b5c6070d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.132310 kubelet[2506]: E1101 00:14:28.132234 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.132310 kubelet[2506]: E1101 00:14:28.132308 2506 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4vmf6" Nov 1 00:14:28.132310 kubelet[2506]: E1101 00:14:28.132330 2506 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4vmf6" Nov 1 00:14:28.132649 kubelet[2506]: E1101 00:14:28.132407 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4vmf6_kube-system(53c7655e-7d0a-426b-a88e-be70b5c6070d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4vmf6_kube-system(53c7655e-7d0a-426b-a88e-be70b5c6070d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4vmf6" podUID="53c7655e-7d0a-426b-a88e-be70b5c6070d" Nov 1 00:14:28.174376 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e-shm.mount: Deactivated successfully. Nov 1 00:14:28.806289 kubelet[2506]: I1101 00:14:28.806228 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Nov 1 00:14:28.808204 kubelet[2506]: I1101 00:14:28.808162 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Nov 1 00:14:28.809888 kubelet[2506]: I1101 00:14:28.809857 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Nov 1 00:14:28.810954 kubelet[2506]: I1101 00:14:28.810912 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Nov 1 00:14:28.831842 containerd[1460]: time="2025-11-01T00:14:28.831792776Z" level=info msg="StopPodSandbox for \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\"" Nov 1 00:14:28.832532 containerd[1460]: time="2025-11-01T00:14:28.831840656Z" level=info msg="StopPodSandbox for \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\"" Nov 1 00:14:28.832570 kubelet[2506]: I1101 00:14:28.832527 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Nov 1 00:14:28.832913 containerd[1460]: time="2025-11-01T00:14:28.831863990Z" level=info msg="StopPodSandbox for \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\"" Nov 1 00:14:28.833402 containerd[1460]: time="2025-11-01T00:14:28.833317239Z" level=info msg="StopPodSandbox for \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\"" Nov 1 00:14:28.839898 containerd[1460]: time="2025-11-01T00:14:28.839550696Z" level=info msg="Ensure that sandbox 627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0 in task-service has been cleanup successfully" Nov 1 00:14:28.839898 containerd[1460]: time="2025-11-01T00:14:28.839596152Z" level=info msg="Ensure that sandbox b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d in task-service has been cleanup successfully" Nov 1 00:14:28.839898 containerd[1460]: time="2025-11-01T00:14:28.839713282Z" level=info msg="StopPodSandbox for \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\"" Nov 1 00:14:28.840028 containerd[1460]: time="2025-11-01T00:14:28.839990683Z" level=info msg="Ensure that sandbox ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8 in task-service has been cleanup successfully" Nov 1 00:14:28.840459 containerd[1460]: time="2025-11-01T00:14:28.840423055Z" level=info msg="Ensure that sandbox e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8 in task-service has been cleanup successfully" Nov 1 00:14:28.841706 containerd[1460]: time="2025-11-01T00:14:28.839555245Z" level=info msg="Ensure that sandbox bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853 in task-service has been cleanup successfully" Nov 1 00:14:28.843257 kubelet[2506]: I1101 00:14:28.843212 2506 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Nov 1 00:14:28.844341 containerd[1460]: time="2025-11-01T00:14:28.844307581Z" level=info msg="StopPodSandbox for \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\"" Nov 1 00:14:28.844516 containerd[1460]: time="2025-11-01T00:14:28.844475867Z" level=info msg="Ensure that sandbox 33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a in task-service has been cleanup successfully" Nov 1 00:14:28.845770 kubelet[2506]: I1101 00:14:28.845634 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Nov 1 00:14:28.846829 containerd[1460]: time="2025-11-01T00:14:28.846445376Z" level=info msg="StopPodSandbox for \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\"" Nov 1 00:14:28.846829 containerd[1460]: time="2025-11-01T00:14:28.846602411Z" level=info msg="Ensure that sandbox f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96 in task-service has been cleanup successfully" Nov 1 00:14:28.850469 kubelet[2506]: I1101 00:14:28.850435 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Nov 1 00:14:28.851175 containerd[1460]: time="2025-11-01T00:14:28.851142668Z" level=info msg="StopPodSandbox for \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\"" Nov 1 00:14:28.852594 containerd[1460]: time="2025-11-01T00:14:28.852559229Z" level=info msg="Ensure that sandbox e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e in task-service has been cleanup successfully" Nov 1 00:14:28.900613 containerd[1460]: time="2025-11-01T00:14:28.900552065Z" level=error msg="StopPodSandbox for \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\" failed" error="failed to destroy network for sandbox \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.902436 kubelet[2506]: E1101 00:14:28.902395 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Nov 1 00:14:28.902656 kubelet[2506]: E1101 00:14:28.902600 2506 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8"} Nov 1 00:14:28.902833 kubelet[2506]: E1101 00:14:28.902743 2506 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6846208b-d846-430d-8df4-ccfb42c456d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Nov 1 00:14:28.902833 kubelet[2506]: E1101 00:14:28.902789 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6846208b-d846-430d-8df4-ccfb42c456d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-vw47h" podUID="6846208b-d846-430d-8df4-ccfb42c456d3" Nov 1 00:14:28.903072 containerd[1460]: time="2025-11-01T00:14:28.903044737Z" level=error msg="StopPodSandbox for \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\" failed" error="failed to destroy network for sandbox \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.903420 kubelet[2506]: E1101 00:14:28.903276 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Nov 1 00:14:28.903607 kubelet[2506]: E1101 00:14:28.903513 2506 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a"} Nov 1 00:14:28.903607 kubelet[2506]: E1101 00:14:28.903545 2506 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f11d7d31-f676-4516-b063-ddcb43a2faf5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:14:28.903607 kubelet[2506]: E1101 00:14:28.903572 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f11d7d31-f676-4516-b063-ddcb43a2faf5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b56b4988b-k8vh2" podUID="f11d7d31-f676-4516-b063-ddcb43a2faf5" Nov 1 00:14:28.933792 containerd[1460]: time="2025-11-01T00:14:28.933636242Z" level=error msg="StopPodSandbox for \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\" failed" error="failed to destroy network for sandbox \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.935541 kubelet[2506]: E1101 00:14:28.935486 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Nov 1 00:14:28.935630 kubelet[2506]: E1101 00:14:28.935548 2506 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8"} Nov 1 00:14:28.935630 kubelet[2506]: E1101 00:14:28.935592 2506 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9265ab6d-1d0a-42f4-baa7-12e5c42cad61\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:14:28.935744 kubelet[2506]: E1101 00:14:28.935625 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9265ab6d-1d0a-42f4-baa7-12e5c42cad61\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-558d5b9ff5-tdbvc" podUID="9265ab6d-1d0a-42f4-baa7-12e5c42cad61" Nov 1 00:14:28.946275 containerd[1460]: time="2025-11-01T00:14:28.946176785Z" level=error msg="StopPodSandbox for \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\" failed" error="failed to destroy network for sandbox \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.948133 kubelet[2506]: E1101 00:14:28.948065 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Nov 1 00:14:28.948231 kubelet[2506]: E1101 00:14:28.948144 2506 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96"} Nov 1 00:14:28.948231 kubelet[2506]: E1101 00:14:28.948191 2506 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"53c7655e-7d0a-426b-a88e-be70b5c6070d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network 
for sandbox \\\"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:14:28.948357 kubelet[2506]: E1101 00:14:28.948228 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"53c7655e-7d0a-426b-a88e-be70b5c6070d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4vmf6" podUID="53c7655e-7d0a-426b-a88e-be70b5c6070d" Nov 1 00:14:28.949411 containerd[1460]: time="2025-11-01T00:14:28.949359122Z" level=error msg="StopPodSandbox for \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\" failed" error="failed to destroy network for sandbox \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.949810 kubelet[2506]: E1101 00:14:28.949730 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Nov 1 00:14:28.949810 kubelet[2506]: E1101 00:14:28.949765 2506 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0"} Nov 1 00:14:28.949810 kubelet[2506]: E1101 00:14:28.949799 2506 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42d6452e-a1e5-4daf-80fd-e1f205f5b03a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:14:28.949944 kubelet[2506]: E1101 00:14:28.949819 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42d6452e-a1e5-4daf-80fd-e1f205f5b03a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b56b4988b-mnxsg" podUID="42d6452e-a1e5-4daf-80fd-e1f205f5b03a" Nov 1 00:14:28.956798 containerd[1460]: time="2025-11-01T00:14:28.956713654Z" level=error msg="StopPodSandbox for \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\" 
failed" error="failed to destroy network for sandbox \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.957107 containerd[1460]: time="2025-11-01T00:14:28.957065525Z" level=error msg="StopPodSandbox for \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\" failed" error="failed to destroy network for sandbox \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.957190 kubelet[2506]: E1101 00:14:28.957133 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Nov 1 00:14:28.957264 kubelet[2506]: E1101 00:14:28.957161 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Nov 1 00:14:28.957264 kubelet[2506]: E1101 00:14:28.957214 2506 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d"} Nov 1 00:14:28.957341 kubelet[2506]: E1101 00:14:28.957256 2506 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e75afe96-48a0-4769-9bc6-591261c95345\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:14:28.957341 kubelet[2506]: E1101 00:14:28.957297 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e75afe96-48a0-4769-9bc6-591261c95345\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-xs644" podUID="e75afe96-48a0-4769-9bc6-591261c95345" Nov 1 00:14:28.957341 kubelet[2506]: E1101 00:14:28.957187 2506 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853"} Nov 1 00:14:28.957341 kubelet[2506]: E1101 
00:14:28.957330 2506 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"88e24afd-1301-4e45-96e8-67af65d033d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:14:28.957548 kubelet[2506]: E1101 00:14:28.957346 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"88e24afd-1301-4e45-96e8-67af65d033d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5cdc56d7f5-vs2fv" podUID="88e24afd-1301-4e45-96e8-67af65d033d0" Nov 1 00:14:28.957610 containerd[1460]: time="2025-11-01T00:14:28.957426673Z" level=error msg="StopPodSandbox for \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\" failed" error="failed to destroy network for sandbox \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:14:28.957647 kubelet[2506]: E1101 00:14:28.957593 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Nov 1 00:14:28.957647 kubelet[2506]: E1101 00:14:28.957619 2506 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e"} Nov 1 00:14:28.957647 kubelet[2506]: E1101 00:14:28.957640 2506 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d5658fcc-61ca-4e96-9f79-25e33876cacb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:14:28.957808 kubelet[2506]: E1101 00:14:28.957662 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d5658fcc-61ca-4e96-9f79-25e33876cacb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-ln244" podUID="d5658fcc-61ca-4e96-9f79-25e33876cacb" Nov 1 00:14:35.171318 systemd[1]: Started sshd@7-10.0.0.19:22-10.0.0.1:50850.service - OpenSSH per-connection server daemon (10.0.0.1:50850). Nov 1 00:14:35.233419 sshd[3799]: Accepted publickey for core from 10.0.0.1 port 50850 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:14:35.244215 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:14:35.250401 systemd-logind[1438]: New session 8 of user core. Nov 1 00:14:35.256837 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 1 00:14:35.450768 sshd[3799]: pam_unix(sshd:session): session closed for user core Nov 1 00:14:35.459178 systemd[1]: sshd@7-10.0.0.19:22-10.0.0.1:50850.service: Deactivated successfully. Nov 1 00:14:35.463901 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:14:35.465585 systemd-logind[1438]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:14:35.467375 systemd-logind[1438]: Removed session 8. Nov 1 00:14:37.468413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2206624739.mount: Deactivated successfully. Nov 1 00:14:38.884280 containerd[1460]: time="2025-11-01T00:14:38.884135789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:14:38.886797 containerd[1460]: time="2025-11-01T00:14:38.886666870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 00:14:38.888354 containerd[1460]: time="2025-11-01T00:14:38.888292109Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:14:38.890823 containerd[1460]: time="2025-11-01T00:14:38.890763819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:14:38.891595 containerd[1460]: time="2025-11-01T00:14:38.891519337Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 11.089549812s" Nov 1 00:14:38.891595 containerd[1460]: time="2025-11-01T00:14:38.891574400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:14:38.906282 containerd[1460]: time="2025-11-01T00:14:38.906223633Z" level=info msg="CreateContainer within sandbox \"4816a61361446a195560714c2a023af5c2ba328e93f10813de8536641e46235a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:14:38.929198 containerd[1460]: time="2025-11-01T00:14:38.929149211Z" level=info msg="CreateContainer within sandbox \"4816a61361446a195560714c2a023af5c2ba328e93f10813de8536641e46235a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7fe70780ce5df6b87368dbd885c4a656165677fce2a3b39354951f8ac9c354e7\"" Nov 1 00:14:38.929794 containerd[1460]: time="2025-11-01T00:14:38.929760068Z" level=info msg="StartContainer for 
\"7fe70780ce5df6b87368dbd885c4a656165677fce2a3b39354951f8ac9c354e7\"" Nov 1 00:14:38.988013 systemd[1]: Started cri-containerd-7fe70780ce5df6b87368dbd885c4a656165677fce2a3b39354951f8ac9c354e7.scope - libcontainer container 7fe70780ce5df6b87368dbd885c4a656165677fce2a3b39354951f8ac9c354e7. Nov 1 00:14:39.559910 containerd[1460]: time="2025-11-01T00:14:39.559822837Z" level=info msg="StartContainer for \"7fe70780ce5df6b87368dbd885c4a656165677fce2a3b39354951f8ac9c354e7\" returns successfully" Nov 1 00:14:39.605344 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:14:39.607764 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 1 00:14:39.698725 containerd[1460]: time="2025-11-01T00:14:39.698642021Z" level=info msg="StopPodSandbox for \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\"" Nov 1 00:14:39.699897 containerd[1460]: time="2025-11-01T00:14:39.699189328Z" level=info msg="StopPodSandbox for \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\"" Nov 1 00:14:39.731600 containerd[1460]: time="2025-11-01T00:14:39.731555470Z" level=info msg="StopPodSandbox for \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\"" Nov 1 00:14:39.893511 kubelet[2506]: E1101 00:14:39.893462 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:39.942752 containerd[1460]: 2025-11-01 00:14:39.810 [INFO][3893] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Nov 1 00:14:39.942752 containerd[1460]: 2025-11-01 00:14:39.812 [INFO][3893] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" iface="eth0" netns="/var/run/netns/cni-625ff195-6c14-e30e-8dcb-bdae72c67514" Nov 1 00:14:39.942752 containerd[1460]: 2025-11-01 00:14:39.812 [INFO][3893] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" iface="eth0" netns="/var/run/netns/cni-625ff195-6c14-e30e-8dcb-bdae72c67514" Nov 1 00:14:39.942752 containerd[1460]: 2025-11-01 00:14:39.812 [INFO][3893] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" iface="eth0" netns="/var/run/netns/cni-625ff195-6c14-e30e-8dcb-bdae72c67514" Nov 1 00:14:39.942752 containerd[1460]: 2025-11-01 00:14:39.812 [INFO][3893] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Nov 1 00:14:39.942752 containerd[1460]: 2025-11-01 00:14:39.812 [INFO][3893] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Nov 1 00:14:39.942752 containerd[1460]: 2025-11-01 00:14:39.918 [INFO][3933] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" HandleID="k8s-pod-network.e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Workload="localhost-k8s-csi--node--driver--ln244-eth0" Nov 1 00:14:39.942752 containerd[1460]: 2025-11-01 00:14:39.919 [INFO][3933] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 00:14:39.942752 containerd[1460]: 2025-11-01 00:14:39.919 [INFO][3933] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:39.942752 containerd[1460]: 2025-11-01 00:14:39.930 [WARNING][3933] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" HandleID="k8s-pod-network.e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Workload="localhost-k8s-csi--node--driver--ln244-eth0" Nov 1 00:14:39.942752 containerd[1460]: 2025-11-01 00:14:39.930 [INFO][3933] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" HandleID="k8s-pod-network.e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Workload="localhost-k8s-csi--node--driver--ln244-eth0" Nov 1 00:14:39.942752 containerd[1460]: 2025-11-01 00:14:39.932 [INFO][3933] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:39.942752 containerd[1460]: 2025-11-01 00:14:39.936 [INFO][3893] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Nov 1 00:14:39.945988 containerd[1460]: time="2025-11-01T00:14:39.945819384Z" level=info msg="TearDown network for sandbox \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\" successfully" Nov 1 00:14:39.945988 containerd[1460]: time="2025-11-01T00:14:39.945880969Z" level=info msg="StopPodSandbox for \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\" returns successfully" Nov 1 00:14:39.946397 systemd[1]: run-netns-cni\x2d625ff195\x2d6c14\x2de30e\x2d8dcb\x2dbdae72c67514.mount: Deactivated successfully. Nov 1 00:14:39.955624 containerd[1460]: time="2025-11-01T00:14:39.955574993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ln244,Uid:d5658fcc-61ca-4e96-9f79-25e33876cacb,Namespace:calico-system,Attempt:1,}" Nov 1 00:14:39.957182 containerd[1460]: 2025-11-01 00:14:39.850 [INFO][3920] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Nov 1 00:14:39.957182 containerd[1460]: 2025-11-01 00:14:39.850 [INFO][3920] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" iface="eth0" netns="/var/run/netns/cni-56acfd2a-c0e6-9fa0-0956-1e3a638cdaab" Nov 1 00:14:39.957182 containerd[1460]: 2025-11-01 00:14:39.852 [INFO][3920] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" iface="eth0" netns="/var/run/netns/cni-56acfd2a-c0e6-9fa0-0956-1e3a638cdaab" Nov 1 00:14:39.957182 containerd[1460]: 2025-11-01 00:14:39.852 [INFO][3920] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" iface="eth0" netns="/var/run/netns/cni-56acfd2a-c0e6-9fa0-0956-1e3a638cdaab" Nov 1 00:14:39.957182 containerd[1460]: 2025-11-01 00:14:39.853 [INFO][3920] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Nov 1 00:14:39.957182 containerd[1460]: 2025-11-01 00:14:39.853 [INFO][3920] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Nov 1 00:14:39.957182 containerd[1460]: 2025-11-01 00:14:39.918 [INFO][3945] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" HandleID="k8s-pod-network.bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Workload="localhost-k8s-whisker--5cdc56d7f5--vs2fv-eth0" Nov 1 00:14:39.957182 containerd[1460]: 2025-11-01 00:14:39.919 [INFO][3945] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:39.957182 containerd[1460]: 2025-11-01 00:14:39.933 [INFO][3945] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:39.957182 containerd[1460]: 2025-11-01 00:14:39.944 [WARNING][3945] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" HandleID="k8s-pod-network.bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Workload="localhost-k8s-whisker--5cdc56d7f5--vs2fv-eth0" Nov 1 00:14:39.957182 containerd[1460]: 2025-11-01 00:14:39.946 [INFO][3945] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" HandleID="k8s-pod-network.bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Workload="localhost-k8s-whisker--5cdc56d7f5--vs2fv-eth0" Nov 1 00:14:39.957182 containerd[1460]: 2025-11-01 00:14:39.949 [INFO][3945] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:39.957182 containerd[1460]: 2025-11-01 00:14:39.953 [INFO][3920] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Nov 1 00:14:39.958474 containerd[1460]: time="2025-11-01T00:14:39.957392073Z" level=info msg="TearDown network for sandbox \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\" successfully" Nov 1 00:14:39.958474 containerd[1460]: time="2025-11-01T00:14:39.957415286Z" level=info msg="StopPodSandbox for \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\" returns successfully" Nov 1 00:14:39.962346 systemd[1]: run-netns-cni\x2d56acfd2a\x2dc0e6\x2d9fa0\x2d0956\x2d1e3a638cdaab.mount: Deactivated successfully. Nov 1 00:14:39.970018 containerd[1460]: 2025-11-01 00:14:39.795 [INFO][3885] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Nov 1 00:14:39.970018 containerd[1460]: 2025-11-01 00:14:39.796 [INFO][3885] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" iface="eth0" netns="/var/run/netns/cni-be1e53ec-ab30-f07a-d14c-110b0e809ca1" Nov 1 00:14:39.970018 containerd[1460]: 2025-11-01 00:14:39.796 [INFO][3885] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" iface="eth0" netns="/var/run/netns/cni-be1e53ec-ab30-f07a-d14c-110b0e809ca1" Nov 1 00:14:39.970018 containerd[1460]: 2025-11-01 00:14:39.804 [INFO][3885] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" iface="eth0" netns="/var/run/netns/cni-be1e53ec-ab30-f07a-d14c-110b0e809ca1" Nov 1 00:14:39.970018 containerd[1460]: 2025-11-01 00:14:39.804 [INFO][3885] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Nov 1 00:14:39.970018 containerd[1460]: 2025-11-01 00:14:39.804 [INFO][3885] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Nov 1 00:14:39.970018 containerd[1460]: 2025-11-01 00:14:39.918 [INFO][3931] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" HandleID="k8s-pod-network.33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Workload="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" Nov 1 00:14:39.970018 containerd[1460]: 2025-11-01 00:14:39.918 [INFO][3931] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:39.970018 containerd[1460]: 2025-11-01 00:14:39.949 [INFO][3931] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:39.970018 containerd[1460]: 2025-11-01 00:14:39.956 [WARNING][3931] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" HandleID="k8s-pod-network.33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Workload="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" Nov 1 00:14:39.970018 containerd[1460]: 2025-11-01 00:14:39.956 [INFO][3931] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" HandleID="k8s-pod-network.33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Workload="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" Nov 1 00:14:39.970018 containerd[1460]: 2025-11-01 00:14:39.961 [INFO][3931] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:39.970018 containerd[1460]: 2025-11-01 00:14:39.964 [INFO][3885] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Nov 1 00:14:39.971354 containerd[1460]: time="2025-11-01T00:14:39.971026992Z" level=info msg="TearDown network for sandbox \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\" successfully" Nov 1 00:14:39.971354 containerd[1460]: time="2025-11-01T00:14:39.971056537Z" level=info msg="StopPodSandbox for \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\" returns successfully" Nov 1 00:14:39.976085 containerd[1460]: time="2025-11-01T00:14:39.976023219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b56b4988b-k8vh2,Uid:f11d7d31-f676-4516-b063-ddcb43a2faf5,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:14:39.976652 systemd[1]: run-netns-cni\x2dbe1e53ec\x2dab30\x2df07a\x2dd14c\x2d110b0e809ca1.mount: Deactivated successfully. 
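[Annotation] The teardowns above are the successful retries of the StopPodSandbox calls that all failed at 00:14:28: every CNI DEL was rejected with "stat /var/lib/calico/nodename: no such file or directory" because the calico/node container only started at 00:14:38, so kubelet kept logging "Error syncing pod, skipping" and retrying. A minimal Go sketch of that precondition check (the path and error guidance are taken from the log itself; the program is illustrative, not Calico's actual code):

package main

import (
	"fmt"
	"os"
)

// Path from the log above: written by calico/node once it is running and
// has the host's /var/lib/calico/ directory mounted.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	name, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		// The 00:14:28 failure mode: until calico/node runs, every CNI
		// ADD/DEL fails with this guidance and kubelet retries the sandbox.
		fmt.Println("check that the calico/node container is running and has mounted /var/lib/calico/")
		os.Exit(1)
	}
	if err != nil {
		fmt.Println("stat", nodenameFile, "failed:", err)
		os.Exit(1)
	}
	fmt.Printf("CNI requests can proceed; node name: %s\n", name)
}

Note also the WARNING entries above ("Asked to release address but it doesn't exist. Ignoring"): because the earlier DELs never completed an ADD-side IPAM record, the release is treated as an idempotent no-op, which is what keeps CNI DEL safe to retry.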
Nov 1 00:14:40.049766 kubelet[2506]: I1101 00:14:40.049666 2506 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/88e24afd-1301-4e45-96e8-67af65d033d0-whisker-backend-key-pair\") pod \"88e24afd-1301-4e45-96e8-67af65d033d0\" (UID: \"88e24afd-1301-4e45-96e8-67af65d033d0\") " Nov 1 00:14:40.049766 kubelet[2506]: I1101 00:14:40.049799 2506 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klhb9\" (UniqueName: \"kubernetes.io/projected/88e24afd-1301-4e45-96e8-67af65d033d0-kube-api-access-klhb9\") pod \"88e24afd-1301-4e45-96e8-67af65d033d0\" (UID: \"88e24afd-1301-4e45-96e8-67af65d033d0\") " Nov 1 00:14:40.050129 kubelet[2506]: I1101 00:14:40.049856 2506 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88e24afd-1301-4e45-96e8-67af65d033d0-whisker-ca-bundle\") pod \"88e24afd-1301-4e45-96e8-67af65d033d0\" (UID: \"88e24afd-1301-4e45-96e8-67af65d033d0\") " Nov 1 00:14:40.054876 kubelet[2506]: I1101 00:14:40.054780 2506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88e24afd-1301-4e45-96e8-67af65d033d0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "88e24afd-1301-4e45-96e8-67af65d033d0" (UID: "88e24afd-1301-4e45-96e8-67af65d033d0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:14:40.057168 kubelet[2506]: I1101 00:14:40.057078 2506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88e24afd-1301-4e45-96e8-67af65d033d0-kube-api-access-klhb9" (OuterVolumeSpecName: "kube-api-access-klhb9") pod "88e24afd-1301-4e45-96e8-67af65d033d0" (UID: "88e24afd-1301-4e45-96e8-67af65d033d0"). InnerVolumeSpecName "kube-api-access-klhb9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:14:40.058112 kubelet[2506]: I1101 00:14:40.057988 2506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e24afd-1301-4e45-96e8-67af65d033d0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "88e24afd-1301-4e45-96e8-67af65d033d0" (UID: "88e24afd-1301-4e45-96e8-67af65d033d0"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:14:40.147536 systemd-networkd[1367]: cali3c6f2809e9d: Link UP Nov 1 00:14:40.147845 systemd-networkd[1367]: cali3c6f2809e9d: Gained carrier Nov 1 00:14:40.150753 kubelet[2506]: I1101 00:14:40.150634 2506 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88e24afd-1301-4e45-96e8-67af65d033d0-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 1 00:14:40.150956 kubelet[2506]: I1101 00:14:40.150914 2506 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/88e24afd-1301-4e45-96e8-67af65d033d0-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 1 00:14:40.150956 kubelet[2506]: I1101 00:14:40.150935 2506 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-klhb9\" (UniqueName: \"kubernetes.io/projected/88e24afd-1301-4e45-96e8-67af65d033d0-kube-api-access-klhb9\") on node \"localhost\" DevicePath \"\"" Nov 1 00:14:40.165192 kubelet[2506]: I1101 00:14:40.165103 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rrkfj" podStartSLOduration=2.557828272 podStartE2EDuration="25.165076084s" podCreationTimestamp="2025-11-01 00:14:15 +0000 UTC" firstStartedPulling="2025-11-01 00:14:16.285458823 +0000 UTC m=+20.713360376" lastFinishedPulling="2025-11-01 00:14:38.892706625 +0000 UTC m=+43.320608188" observedRunningTime="2025-11-01 00:14:39.928683475 +0000 UTC m=+44.356585038" watchObservedRunningTime="2025-11-01 00:14:40.165076084 +0000 UTC m=+44.592977647" Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.020 [INFO][3957] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.048 [INFO][3957] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ln244-eth0 csi-node-driver- calico-system d5658fcc-61ca-4e96-9f79-25e33876cacb 1013 0 2025-11-01 00:14:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ln244 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3c6f2809e9d [] [] }} ContainerID="d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" Namespace="calico-system" Pod="csi-node-driver-ln244" WorkloadEndpoint="localhost-k8s-csi--node--driver--ln244-" Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.048 [INFO][3957] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" Namespace="calico-system" Pod="csi-node-driver-ln244" WorkloadEndpoint="localhost-k8s-csi--node--driver--ln244-eth0" Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.094 [INFO][3987] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" HandleID="k8s-pod-network.d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" Workload="localhost-k8s-csi--node--driver--ln244-eth0" Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.095 [INFO][3987] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" HandleID="k8s-pod-network.d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" Workload="localhost-k8s-csi--node--driver--ln244-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ln244", "timestamp":"2025-11-01 00:14:40.094884961 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.095 [INFO][3987] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.095 [INFO][3987] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.095 [INFO][3987] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.103 [INFO][3987] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" host="localhost" Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.111 [INFO][3987] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.118 [INFO][3987] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.121 [INFO][3987] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.123 [INFO][3987] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.123 [INFO][3987] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" host="localhost" Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.125 [INFO][3987] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.130 [INFO][3987] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" host="localhost" Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.135 [INFO][3987] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" host="localhost" Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.135 [INFO][3987] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" host="localhost" Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.135 [INFO][3987] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
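[Annotation] The ipam.go entries above show how the address is chosen: this host holds an affinity for the /26 block 192.168.88.128/26 (64 addresses), and the plugin claims the first free address in it, 192.168.88.129 for csi-node-driver-ln244 (the next pod below gets .130). A toy Go sketch of that block walk, assuming nothing beyond the CIDR shown in the log:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Host-affine block from the log: 192.168.88.128/26.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// Treat the block's base address as unavailable, then claim the first
	// free address, mimicking ipam.go's "Attempting to assign 1 addresses
	// from block". Real Calico persists this claim in its datastore.
	used := map[netip.Addr]bool{block.Addr(): true}
	claim := func() netip.Addr {
		for a := block.Addr(); block.Contains(a); a = a.Next() {
			if !used[a] {
				used[a] = true
				return a
			}
		}
		panic("block exhausted")
	}
	fmt.Println(claim()) // 192.168.88.129
	fmt.Println(claim()) // 192.168.88.130
}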
Nov 1 00:14:40.170651 containerd[1460]: 2025-11-01 00:14:40.135 [INFO][3987] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" HandleID="k8s-pod-network.d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" Workload="localhost-k8s-csi--node--driver--ln244-eth0" Nov 1 00:14:40.171424 containerd[1460]: 2025-11-01 00:14:40.138 [INFO][3957] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" Namespace="calico-system" Pod="csi-node-driver-ln244" WorkloadEndpoint="localhost-k8s-csi--node--driver--ln244-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ln244-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d5658fcc-61ca-4e96-9f79-25e33876cacb", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ln244", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3c6f2809e9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:40.171424 containerd[1460]: 2025-11-01 00:14:40.139 [INFO][3957] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" Namespace="calico-system" Pod="csi-node-driver-ln244" WorkloadEndpoint="localhost-k8s-csi--node--driver--ln244-eth0" Nov 1 00:14:40.171424 containerd[1460]: 2025-11-01 00:14:40.139 [INFO][3957] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c6f2809e9d ContainerID="d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" Namespace="calico-system" Pod="csi-node-driver-ln244" WorkloadEndpoint="localhost-k8s-csi--node--driver--ln244-eth0" Nov 1 00:14:40.171424 containerd[1460]: 2025-11-01 00:14:40.149 [INFO][3957] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" Namespace="calico-system" Pod="csi-node-driver-ln244" WorkloadEndpoint="localhost-k8s-csi--node--driver--ln244-eth0" Nov 1 00:14:40.171424 containerd[1460]: 2025-11-01 00:14:40.151 [INFO][3957] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" Namespace="calico-system" Pod="csi-node-driver-ln244" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--ln244-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ln244-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d5658fcc-61ca-4e96-9f79-25e33876cacb", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa", Pod:"csi-node-driver-ln244", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3c6f2809e9d", MAC:"52:9b:e4:d0:3a:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:40.171424 containerd[1460]: 2025-11-01 00:14:40.166 [INFO][3957] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa" Namespace="calico-system" Pod="csi-node-driver-ln244" WorkloadEndpoint="localhost-k8s-csi--node--driver--ln244-eth0" Nov 1 00:14:40.198721 containerd[1460]: time="2025-11-01T00:14:40.198588294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:14:40.198721 containerd[1460]: time="2025-11-01T00:14:40.198653987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:14:40.198721 containerd[1460]: time="2025-11-01T00:14:40.198681820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:40.199005 containerd[1460]: time="2025-11-01T00:14:40.198845086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:40.223216 systemd[1]: Started cri-containerd-d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa.scope - libcontainer container d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa. 
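[Annotation] k8s.go dumps the WorkloadEndpoint twice above: first "Populated endpoint" with an empty ContainerID and MAC, then, after the host-side veth cali3c6f2809e9d is created, "Added Mac, interface name, and active container ID to endpoint" before the object is written to the datastore. A stripped-down illustration of that two-phase fill (field values copied from the log; the struct is a toy stand-in, not Calico's v3.WorkloadEndpoint):

package main

import "fmt"

// Only the fields that visibly change between the two log dumps.
type workloadEndpoint struct {
	Pod           string
	InterfaceName string
	IPNetworks    []string
	MAC           string
	ContainerID   string
}

func main() {
	// Phase 1, "Populated endpoint": IP and veth name chosen, identity empty.
	ep := workloadEndpoint{
		Pod:           "csi-node-driver-ln244",
		InterfaceName: "cali3c6f2809e9d",
		IPNetworks:    []string{"192.168.88.129/32"},
	}
	fmt.Printf("populated: %+v\n", ep)

	// Phase 2: MAC and active container ID added, then written to datastore.
	ep.MAC = "52:9b:e4:d0:3a:5e"
	ep.ContainerID = "d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa"
	fmt.Printf("final:     %+v\n", ep)
}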
Nov 1 00:14:40.242079 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:14:40.271011 containerd[1460]: time="2025-11-01T00:14:40.270918631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ln244,Uid:d5658fcc-61ca-4e96-9f79-25e33876cacb,Namespace:calico-system,Attempt:1,} returns sandbox id \"d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa\"" Nov 1 00:14:40.276175 containerd[1460]: time="2025-11-01T00:14:40.276002061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:14:40.279583 systemd-networkd[1367]: calib9e6f1000d7: Link UP Nov 1 00:14:40.280634 systemd-networkd[1367]: calib9e6f1000d7: Gained carrier Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.036 [INFO][3968] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.061 [INFO][3968] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0 calico-apiserver-b56b4988b- calico-apiserver f11d7d31-f676-4516-b063-ddcb43a2faf5 1012 0 2025-11-01 00:14:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b56b4988b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b56b4988b-k8vh2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib9e6f1000d7 [] [] }} ContainerID="d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" Namespace="calico-apiserver" Pod="calico-apiserver-b56b4988b-k8vh2" WorkloadEndpoint="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-" Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.061 [INFO][3968] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" Namespace="calico-apiserver" Pod="calico-apiserver-b56b4988b-k8vh2" WorkloadEndpoint="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.096 [INFO][3994] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" HandleID="k8s-pod-network.d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" Workload="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.097 [INFO][3994] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" HandleID="k8s-pod-network.d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" Workload="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a44a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b56b4988b-k8vh2", "timestamp":"2025-11-01 00:14:40.096895944 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.097 [INFO][3994] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.135 [INFO][3994] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.135 [INFO][3994] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.204 [INFO][3994] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" host="localhost" Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.217 [INFO][3994] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.228 [INFO][3994] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.235 [INFO][3994] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.242 [INFO][3994] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.242 [INFO][3994] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" host="localhost" Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.245 [INFO][3994] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.254 [INFO][3994] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" host="localhost" Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.268 [INFO][3994] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" host="localhost" Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.268 [INFO][3994] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" host="localhost" Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.268 [INFO][3994] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:14:40.302252 containerd[1460]: 2025-11-01 00:14:40.268 [INFO][3994] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" HandleID="k8s-pod-network.d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" Workload="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" Nov 1 00:14:40.303126 containerd[1460]: 2025-11-01 00:14:40.273 [INFO][3968] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" Namespace="calico-apiserver" Pod="calico-apiserver-b56b4988b-k8vh2" WorkloadEndpoint="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0", GenerateName:"calico-apiserver-b56b4988b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f11d7d31-f676-4516-b063-ddcb43a2faf5", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b56b4988b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b56b4988b-k8vh2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9e6f1000d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:40.303126 containerd[1460]: 2025-11-01 00:14:40.273 [INFO][3968] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" Namespace="calico-apiserver" Pod="calico-apiserver-b56b4988b-k8vh2" WorkloadEndpoint="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" Nov 1 00:14:40.303126 containerd[1460]: 2025-11-01 00:14:40.273 [INFO][3968] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9e6f1000d7 ContainerID="d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" Namespace="calico-apiserver" Pod="calico-apiserver-b56b4988b-k8vh2" WorkloadEndpoint="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" Nov 1 00:14:40.303126 containerd[1460]: 2025-11-01 00:14:40.281 [INFO][3968] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" Namespace="calico-apiserver" Pod="calico-apiserver-b56b4988b-k8vh2" WorkloadEndpoint="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" Nov 1 00:14:40.303126 containerd[1460]: 2025-11-01 00:14:40.282 [INFO][3968] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" Namespace="calico-apiserver" Pod="calico-apiserver-b56b4988b-k8vh2" WorkloadEndpoint="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0", GenerateName:"calico-apiserver-b56b4988b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f11d7d31-f676-4516-b063-ddcb43a2faf5", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b56b4988b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba", Pod:"calico-apiserver-b56b4988b-k8vh2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9e6f1000d7", MAC:"fa:83:5b:99:1b:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:40.303126 containerd[1460]: 2025-11-01 00:14:40.297 [INFO][3968] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba" Namespace="calico-apiserver" Pod="calico-apiserver-b56b4988b-k8vh2" WorkloadEndpoint="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" Nov 1 00:14:40.339897 containerd[1460]: time="2025-11-01T00:14:40.339332880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:14:40.339897 containerd[1460]: time="2025-11-01T00:14:40.339415746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:14:40.339897 containerd[1460]: time="2025-11-01T00:14:40.339430403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:40.339897 containerd[1460]: time="2025-11-01T00:14:40.339590254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:40.377159 systemd[1]: Started cri-containerd-d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba.scope - libcontainer container d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba. 
Nov 1 00:14:40.396452 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:14:40.428981 containerd[1460]: time="2025-11-01T00:14:40.428682658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b56b4988b-k8vh2,Uid:f11d7d31-f676-4516-b063-ddcb43a2faf5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba\"" Nov 1 00:14:40.470791 systemd[1]: Started sshd@8-10.0.0.19:22-10.0.0.1:49612.service - OpenSSH per-connection server daemon (10.0.0.1:49612). Nov 1 00:14:40.519183 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 49612 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:14:40.522048 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:14:40.527526 systemd-logind[1438]: New session 9 of user core. Nov 1 00:14:40.536125 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 1 00:14:40.613526 containerd[1460]: time="2025-11-01T00:14:40.613451551Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:14:40.615724 containerd[1460]: time="2025-11-01T00:14:40.615619099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:14:40.615932 containerd[1460]: time="2025-11-01T00:14:40.615779379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:14:40.616144 kubelet[2506]: E1101 00:14:40.616078 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:14:40.617707 kubelet[2506]: E1101 00:14:40.617639 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:14:40.618028 kubelet[2506]: E1101 00:14:40.617981 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ln244_calico-system(d5658fcc-61ca-4e96-9f79-25e33876cacb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:14:40.620102 containerd[1460]: time="2025-11-01T00:14:40.618657411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:14:40.684574 sshd[4111]: pam_unix(sshd:session): session closed for user core Nov 1 00:14:40.689608 systemd[1]: sshd@8-10.0.0.19:22-10.0.0.1:49612.service: Deactivated successfully. Nov 1 00:14:40.692966 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:14:40.693831 systemd-logind[1438]: Session 9 logged out. Waiting for processes to exit. 
Nov 1 00:14:40.695357 systemd-logind[1438]: Removed session 9. Nov 1 00:14:40.697094 containerd[1460]: time="2025-11-01T00:14:40.697033465Z" level=info msg="StopPodSandbox for \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\"" Nov 1 00:14:40.829486 containerd[1460]: 2025-11-01 00:14:40.769 [INFO][4136] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Nov 1 00:14:40.829486 containerd[1460]: 2025-11-01 00:14:40.769 [INFO][4136] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" iface="eth0" netns="/var/run/netns/cni-0ae05cb3-c852-f525-4c32-552ae8931455" Nov 1 00:14:40.829486 containerd[1460]: 2025-11-01 00:14:40.770 [INFO][4136] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" iface="eth0" netns="/var/run/netns/cni-0ae05cb3-c852-f525-4c32-552ae8931455" Nov 1 00:14:40.829486 containerd[1460]: 2025-11-01 00:14:40.770 [INFO][4136] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" iface="eth0" netns="/var/run/netns/cni-0ae05cb3-c852-f525-4c32-552ae8931455" Nov 1 00:14:40.829486 containerd[1460]: 2025-11-01 00:14:40.770 [INFO][4136] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Nov 1 00:14:40.829486 containerd[1460]: 2025-11-01 00:14:40.770 [INFO][4136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Nov 1 00:14:40.829486 containerd[1460]: 2025-11-01 00:14:40.804 [INFO][4146] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" HandleID="k8s-pod-network.e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Workload="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" Nov 1 00:14:40.829486 containerd[1460]: 2025-11-01 00:14:40.804 [INFO][4146] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:40.829486 containerd[1460]: 2025-11-01 00:14:40.804 [INFO][4146] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:40.829486 containerd[1460]: 2025-11-01 00:14:40.815 [WARNING][4146] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" HandleID="k8s-pod-network.e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Workload="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" Nov 1 00:14:40.829486 containerd[1460]: 2025-11-01 00:14:40.815 [INFO][4146] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" HandleID="k8s-pod-network.e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Workload="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" Nov 1 00:14:40.829486 containerd[1460]: 2025-11-01 00:14:40.819 [INFO][4146] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:40.829486 containerd[1460]: 2025-11-01 00:14:40.823 [INFO][4136] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Nov 1 00:14:40.830440 containerd[1460]: time="2025-11-01T00:14:40.829836938Z" level=info msg="TearDown network for sandbox \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\" successfully" Nov 1 00:14:40.830440 containerd[1460]: time="2025-11-01T00:14:40.829918892Z" level=info msg="StopPodSandbox for \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\" returns successfully" Nov 1 00:14:40.838755 containerd[1460]: time="2025-11-01T00:14:40.838708406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-558d5b9ff5-tdbvc,Uid:9265ab6d-1d0a-42f4-baa7-12e5c42cad61,Namespace:calico-system,Attempt:1,}" Nov 1 00:14:40.908616 kubelet[2506]: I1101 00:14:40.908566 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:14:40.909053 kubelet[2506]: E1101 00:14:40.909034 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:40.909918 systemd[1]: run-netns-cni\x2d0ae05cb3\x2dc852\x2df525\x2d4c32\x2d552ae8931455.mount: Deactivated successfully. Nov 1 00:14:40.910198 systemd[1]: var-lib-kubelet-pods-88e24afd\x2d1301\x2d4e45\x2d96e8\x2d67af65d033d0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dklhb9.mount: Deactivated successfully. Nov 1 00:14:40.910414 systemd[1]: var-lib-kubelet-pods-88e24afd\x2d1301\x2d4e45\x2d96e8\x2d67af65d033d0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 00:14:40.915794 systemd[1]: Removed slice kubepods-besteffort-pod88e24afd_1301_4e45_96e8_67af65d033d0.slice - libcontainer container kubepods-besteffort-pod88e24afd_1301_4e45_96e8_67af65d033d0.slice. 
Nov 1 00:14:40.939758 containerd[1460]: time="2025-11-01T00:14:40.939330347Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:14:40.942548 containerd[1460]: time="2025-11-01T00:14:40.942476050Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:14:40.942819 containerd[1460]: time="2025-11-01T00:14:40.942659335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:14:40.943400 kubelet[2506]: E1101 00:14:40.942964 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:14:40.943400 kubelet[2506]: E1101 00:14:40.943124 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:14:40.943579 kubelet[2506]: E1101 00:14:40.943458 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-b56b4988b-k8vh2_calico-apiserver(f11d7d31-f676-4516-b063-ddcb43a2faf5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:14:40.943579 kubelet[2506]: E1101 00:14:40.943523 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b56b4988b-k8vh2" podUID="f11d7d31-f676-4516-b063-ddcb43a2faf5" Nov 1 00:14:40.944647 containerd[1460]: time="2025-11-01T00:14:40.944520367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:14:41.031886 systemd[1]: Created slice kubepods-besteffort-poda74773bf_2487_45d4_8b3d_33b1f685360f.slice - libcontainer container kubepods-besteffort-poda74773bf_2487_45d4_8b3d_33b1f685360f.slice. 
Nov 1 00:14:41.056737 systemd-networkd[1367]: cali91f3269ae7e: Link UP Nov 1 00:14:41.057269 systemd-networkd[1367]: cali91f3269ae7e: Gained carrier Nov 1 00:14:41.062620 kubelet[2506]: I1101 00:14:41.062567 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7h9w\" (UniqueName: \"kubernetes.io/projected/a74773bf-2487-45d4-8b3d-33b1f685360f-kube-api-access-h7h9w\") pod \"whisker-f77d9cc7f-vdvzf\" (UID: \"a74773bf-2487-45d4-8b3d-33b1f685360f\") " pod="calico-system/whisker-f77d9cc7f-vdvzf" Nov 1 00:14:41.062944 kubelet[2506]: I1101 00:14:41.062922 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a74773bf-2487-45d4-8b3d-33b1f685360f-whisker-backend-key-pair\") pod \"whisker-f77d9cc7f-vdvzf\" (UID: \"a74773bf-2487-45d4-8b3d-33b1f685360f\") " pod="calico-system/whisker-f77d9cc7f-vdvzf" Nov 1 00:14:41.063085 kubelet[2506]: I1101 00:14:41.063069 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a74773bf-2487-45d4-8b3d-33b1f685360f-whisker-ca-bundle\") pod \"whisker-f77d9cc7f-vdvzf\" (UID: \"a74773bf-2487-45d4-8b3d-33b1f685360f\") " pod="calico-system/whisker-f77d9cc7f-vdvzf" Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:40.889 [INFO][4154] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:40.912 [INFO][4154] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0 calico-kube-controllers-558d5b9ff5- calico-system 9265ab6d-1d0a-42f4-baa7-12e5c42cad61 1042 0 2025-11-01 00:14:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:558d5b9ff5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-558d5b9ff5-tdbvc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali91f3269ae7e [] [] }} ContainerID="203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" Namespace="calico-system" Pod="calico-kube-controllers-558d5b9ff5-tdbvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-" Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:40.912 [INFO][4154] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" Namespace="calico-system" Pod="calico-kube-controllers-558d5b9ff5-tdbvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:40.958 [INFO][4169] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" HandleID="k8s-pod-network.203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" Workload="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:40.958 [INFO][4169] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" 
HandleID="k8s-pod-network.203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" Workload="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7020), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-558d5b9ff5-tdbvc", "timestamp":"2025-11-01 00:14:40.958320986 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:40.958 [INFO][4169] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:40.958 [INFO][4169] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:40.958 [INFO][4169] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:40.977 [INFO][4169] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" host="localhost" Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:40.993 [INFO][4169] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:41.019 [INFO][4169] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:41.023 [INFO][4169] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:41.029 [INFO][4169] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:41.029 [INFO][4169] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" host="localhost" Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:41.033 [INFO][4169] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23 Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:41.042 [INFO][4169] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" host="localhost" Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:41.049 [INFO][4169] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" host="localhost" Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:41.049 [INFO][4169] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" host="localhost" Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:41.049 [INFO][4169] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:14:41.083291 containerd[1460]: 2025-11-01 00:14:41.049 [INFO][4169] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" HandleID="k8s-pod-network.203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" Workload="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" Nov 1 00:14:41.084307 containerd[1460]: 2025-11-01 00:14:41.054 [INFO][4154] cni-plugin/k8s.go 418: Populated endpoint ContainerID="203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" Namespace="calico-system" Pod="calico-kube-controllers-558d5b9ff5-tdbvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0", GenerateName:"calico-kube-controllers-558d5b9ff5-", Namespace:"calico-system", SelfLink:"", UID:"9265ab6d-1d0a-42f4-baa7-12e5c42cad61", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"558d5b9ff5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-558d5b9ff5-tdbvc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali91f3269ae7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:41.084307 containerd[1460]: 2025-11-01 00:14:41.054 [INFO][4154] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" Namespace="calico-system" Pod="calico-kube-controllers-558d5b9ff5-tdbvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" Nov 1 00:14:41.084307 containerd[1460]: 2025-11-01 00:14:41.054 [INFO][4154] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91f3269ae7e ContainerID="203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" Namespace="calico-system" Pod="calico-kube-controllers-558d5b9ff5-tdbvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" Nov 1 00:14:41.084307 containerd[1460]: 2025-11-01 00:14:41.057 [INFO][4154] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" Namespace="calico-system" Pod="calico-kube-controllers-558d5b9ff5-tdbvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" Nov 1 00:14:41.084307 containerd[1460]: 2025-11-01 00:14:41.060 [INFO][4154] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" Namespace="calico-system" Pod="calico-kube-controllers-558d5b9ff5-tdbvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0", GenerateName:"calico-kube-controllers-558d5b9ff5-", Namespace:"calico-system", SelfLink:"", UID:"9265ab6d-1d0a-42f4-baa7-12e5c42cad61", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"558d5b9ff5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23", Pod:"calico-kube-controllers-558d5b9ff5-tdbvc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali91f3269ae7e", MAC:"86:73:86:a0:ab:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:41.084307 containerd[1460]: 2025-11-01 00:14:41.079 [INFO][4154] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23" Namespace="calico-system" Pod="calico-kube-controllers-558d5b9ff5-tdbvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" Nov 1 00:14:41.130349 containerd[1460]: time="2025-11-01T00:14:41.130096767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:14:41.130628 containerd[1460]: time="2025-11-01T00:14:41.130348248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:14:41.130628 containerd[1460]: time="2025-11-01T00:14:41.130416857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:41.131341 containerd[1460]: time="2025-11-01T00:14:41.130668900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:41.209276 systemd[1]: Started cri-containerd-203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23.scope - libcontainer container 203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23. 
Nov 1 00:14:41.261998 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:14:41.273836 containerd[1460]: time="2025-11-01T00:14:41.273761551Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:14:41.303882 containerd[1460]: time="2025-11-01T00:14:41.303818284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-558d5b9ff5-tdbvc,Uid:9265ab6d-1d0a-42f4-baa7-12e5c42cad61,Namespace:calico-system,Attempt:1,} returns sandbox id \"203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23\"" Nov 1 00:14:41.483122 containerd[1460]: time="2025-11-01T00:14:41.482958561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:14:41.483772 containerd[1460]: time="2025-11-01T00:14:41.483151303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:14:41.483876 kubelet[2506]: E1101 00:14:41.483667 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:14:41.484138 kubelet[2506]: E1101 00:14:41.484043 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:14:41.484394 kubelet[2506]: E1101 00:14:41.484336 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ln244_calico-system(d5658fcc-61ca-4e96-9f79-25e33876cacb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:14:41.484524 containerd[1460]: time="2025-11-01T00:14:41.484318243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f77d9cc7f-vdvzf,Uid:a74773bf-2487-45d4-8b3d-33b1f685360f,Namespace:calico-system,Attempt:0,}" Nov 1 00:14:41.484571 kubelet[2506]: E1101 00:14:41.484411 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ln244" podUID="d5658fcc-61ca-4e96-9f79-25e33876cacb" Nov 1 00:14:41.485632 containerd[1460]: time="2025-11-01T00:14:41.485477077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:14:41.586349 systemd-networkd[1367]: cali3c6f2809e9d: Gained IPv6LL Nov 1 00:14:41.699283 kubelet[2506]: I1101 00:14:41.699242 2506 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88e24afd-1301-4e45-96e8-67af65d033d0" path="/var/lib/kubelet/pods/88e24afd-1301-4e45-96e8-67af65d033d0/volumes" Nov 1 00:14:41.710869 kernel: bpftool[4365]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 00:14:41.855676 systemd-networkd[1367]: cali9225a046705: Link UP Nov 1 00:14:41.855954 systemd-networkd[1367]: cali9225a046705: Gained carrier Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.742 [INFO][4348] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--f77d9cc7f--vdvzf-eth0 whisker-f77d9cc7f- calico-system a74773bf-2487-45d4-8b3d-33b1f685360f 1062 0 2025-11-01 00:14:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f77d9cc7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-f77d9cc7f-vdvzf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9225a046705 [] [] }} ContainerID="48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" Namespace="calico-system" Pod="whisker-f77d9cc7f-vdvzf" WorkloadEndpoint="localhost-k8s-whisker--f77d9cc7f--vdvzf-" Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.742 [INFO][4348] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" Namespace="calico-system" Pod="whisker-f77d9cc7f-vdvzf" WorkloadEndpoint="localhost-k8s-whisker--f77d9cc7f--vdvzf-eth0" Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.791 [INFO][4369] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" HandleID="k8s-pod-network.48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" Workload="localhost-k8s-whisker--f77d9cc7f--vdvzf-eth0" Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.791 [INFO][4369] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" HandleID="k8s-pod-network.48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" Workload="localhost-k8s-whisker--f77d9cc7f--vdvzf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043e1f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-f77d9cc7f-vdvzf", "timestamp":"2025-11-01 00:14:41.791448019 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.792 [INFO][4369] ipam/ipam_plugin.go 
377: About to acquire host-wide IPAM lock. Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.792 [INFO][4369] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.792 [INFO][4369] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.804 [INFO][4369] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" host="localhost" Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.810 [INFO][4369] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.818 [INFO][4369] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.822 [INFO][4369] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.826 [INFO][4369] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.826 [INFO][4369] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" host="localhost" Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.828 [INFO][4369] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.834 [INFO][4369] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" host="localhost" Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.844 [INFO][4369] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" host="localhost" Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.844 [INFO][4369] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" host="localhost" Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.845 [INFO][4369] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:14:41.875862 containerd[1460]: 2025-11-01 00:14:41.845 [INFO][4369] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" HandleID="k8s-pod-network.48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" Workload="localhost-k8s-whisker--f77d9cc7f--vdvzf-eth0" Nov 1 00:14:41.876621 containerd[1460]: 2025-11-01 00:14:41.851 [INFO][4348] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" Namespace="calico-system" Pod="whisker-f77d9cc7f-vdvzf" WorkloadEndpoint="localhost-k8s-whisker--f77d9cc7f--vdvzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--f77d9cc7f--vdvzf-eth0", GenerateName:"whisker-f77d9cc7f-", Namespace:"calico-system", SelfLink:"", UID:"a74773bf-2487-45d4-8b3d-33b1f685360f", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f77d9cc7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-f77d9cc7f-vdvzf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9225a046705", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:41.876621 containerd[1460]: 2025-11-01 00:14:41.851 [INFO][4348] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" Namespace="calico-system" Pod="whisker-f77d9cc7f-vdvzf" WorkloadEndpoint="localhost-k8s-whisker--f77d9cc7f--vdvzf-eth0" Nov 1 00:14:41.876621 containerd[1460]: 2025-11-01 00:14:41.851 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9225a046705 ContainerID="48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" Namespace="calico-system" Pod="whisker-f77d9cc7f-vdvzf" WorkloadEndpoint="localhost-k8s-whisker--f77d9cc7f--vdvzf-eth0" Nov 1 00:14:41.876621 containerd[1460]: 2025-11-01 00:14:41.854 [INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" Namespace="calico-system" Pod="whisker-f77d9cc7f-vdvzf" WorkloadEndpoint="localhost-k8s-whisker--f77d9cc7f--vdvzf-eth0" Nov 1 00:14:41.876621 containerd[1460]: 2025-11-01 00:14:41.855 [INFO][4348] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" Namespace="calico-system" Pod="whisker-f77d9cc7f-vdvzf" WorkloadEndpoint="localhost-k8s-whisker--f77d9cc7f--vdvzf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--f77d9cc7f--vdvzf-eth0", GenerateName:"whisker-f77d9cc7f-", Namespace:"calico-system", SelfLink:"", UID:"a74773bf-2487-45d4-8b3d-33b1f685360f", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f77d9cc7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f", Pod:"whisker-f77d9cc7f-vdvzf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9225a046705", MAC:"f6:9f:63:15:49:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:41.876621 containerd[1460]: 2025-11-01 00:14:41.871 [INFO][4348] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f" Namespace="calico-system" Pod="whisker-f77d9cc7f-vdvzf" WorkloadEndpoint="localhost-k8s-whisker--f77d9cc7f--vdvzf-eth0" Nov 1 00:14:41.916920 kubelet[2506]: E1101 00:14:41.916870 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b56b4988b-k8vh2" podUID="f11d7d31-f676-4516-b063-ddcb43a2faf5" Nov 1 00:14:41.921756 kubelet[2506]: E1101 00:14:41.921539 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ln244" podUID="d5658fcc-61ca-4e96-9f79-25e33876cacb" Nov 1 00:14:41.923577 containerd[1460]: 
time="2025-11-01T00:14:41.923512024Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:14:41.927640 containerd[1460]: time="2025-11-01T00:14:41.927297107Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:14:41.927640 containerd[1460]: time="2025-11-01T00:14:41.927414207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:14:41.927836 kubelet[2506]: E1101 00:14:41.927754 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:14:41.927836 kubelet[2506]: E1101 00:14:41.927807 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:14:41.927947 kubelet[2506]: E1101 00:14:41.927903 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-558d5b9ff5-tdbvc_calico-system(9265ab6d-1d0a-42f4-baa7-12e5c42cad61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:14:41.928037 kubelet[2506]: E1101 00:14:41.927948 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558d5b9ff5-tdbvc" podUID="9265ab6d-1d0a-42f4-baa7-12e5c42cad61" Nov 1 00:14:41.930698 containerd[1460]: time="2025-11-01T00:14:41.930549521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:14:41.930698 containerd[1460]: time="2025-11-01T00:14:41.930640201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:14:41.930698 containerd[1460]: time="2025-11-01T00:14:41.930654327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:41.931092 containerd[1460]: time="2025-11-01T00:14:41.931033809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:41.969202 systemd[1]: Started cri-containerd-48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f.scope - libcontainer container 48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f. Nov 1 00:14:41.976025 kubelet[2506]: I1101 00:14:41.974295 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:14:41.976025 kubelet[2506]: E1101 00:14:41.975106 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:42.022130 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:14:42.070010 containerd[1460]: time="2025-11-01T00:14:42.069957442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f77d9cc7f-vdvzf,Uid:a74773bf-2487-45d4-8b3d-33b1f685360f,Namespace:calico-system,Attempt:0,} returns sandbox id \"48e54ddf2bf68887cc589e84ed41d08c99b68c3858b74b59a8ff3fcc23bfcc3f\"" Nov 1 00:14:42.073564 containerd[1460]: time="2025-11-01T00:14:42.073529325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:14:42.098233 systemd-networkd[1367]: calib9e6f1000d7: Gained IPv6LL Nov 1 00:14:42.162142 systemd-networkd[1367]: cali91f3269ae7e: Gained IPv6LL Nov 1 00:14:42.199649 systemd-networkd[1367]: vxlan.calico: Link UP Nov 1 00:14:42.199663 systemd-networkd[1367]: vxlan.calico: Gained carrier Nov 1 00:14:42.408933 containerd[1460]: time="2025-11-01T00:14:42.407398894Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:14:42.411378 containerd[1460]: time="2025-11-01T00:14:42.411320433Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:14:42.411640 containerd[1460]: time="2025-11-01T00:14:42.411583186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:14:42.412144 kubelet[2506]: E1101 00:14:42.412032 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:14:42.412318 kubelet[2506]: E1101 00:14:42.412143 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:14:42.412848 kubelet[2506]: E1101 00:14:42.412709 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-f77d9cc7f-vdvzf_calico-system(a74773bf-2487-45d4-8b3d-33b1f685360f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:14:42.416490 containerd[1460]: time="2025-11-01T00:14:42.416173901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:14:42.698350 containerd[1460]: time="2025-11-01T00:14:42.698156737Z" level=info msg="StopPodSandbox for \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\"" Nov 1 00:14:42.742748 containerd[1460]: time="2025-11-01T00:14:42.741454273Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:14:42.754737 containerd[1460]: time="2025-11-01T00:14:42.752376359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:14:42.754737 containerd[1460]: time="2025-11-01T00:14:42.752553070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:14:42.754951 kubelet[2506]: E1101 00:14:42.752878 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:14:42.754951 kubelet[2506]: E1101 00:14:42.752936 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:14:42.754951 kubelet[2506]: E1101 00:14:42.753063 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-f77d9cc7f-vdvzf_calico-system(a74773bf-2487-45d4-8b3d-33b1f685360f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:14:42.755088 kubelet[2506]: E1101 00:14:42.753238 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f77d9cc7f-vdvzf" podUID="a74773bf-2487-45d4-8b3d-33b1f685360f" Nov 1 00:14:42.875116 containerd[1460]: 2025-11-01 00:14:42.816 [INFO][4552] cni-plugin/k8s.go 640: 
Cleaning up netns ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Nov 1 00:14:42.875116 containerd[1460]: 2025-11-01 00:14:42.817 [INFO][4552] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" iface="eth0" netns="/var/run/netns/cni-e1b2e0da-afc8-d5a7-e3af-aef6d93d4d7a" Nov 1 00:14:42.875116 containerd[1460]: 2025-11-01 00:14:42.817 [INFO][4552] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" iface="eth0" netns="/var/run/netns/cni-e1b2e0da-afc8-d5a7-e3af-aef6d93d4d7a" Nov 1 00:14:42.875116 containerd[1460]: 2025-11-01 00:14:42.817 [INFO][4552] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" iface="eth0" netns="/var/run/netns/cni-e1b2e0da-afc8-d5a7-e3af-aef6d93d4d7a" Nov 1 00:14:42.875116 containerd[1460]: 2025-11-01 00:14:42.817 [INFO][4552] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Nov 1 00:14:42.875116 containerd[1460]: 2025-11-01 00:14:42.817 [INFO][4552] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Nov 1 00:14:42.875116 containerd[1460]: 2025-11-01 00:14:42.855 [INFO][4573] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" HandleID="k8s-pod-network.f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Workload="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" Nov 1 00:14:42.875116 containerd[1460]: 2025-11-01 00:14:42.855 [INFO][4573] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:42.875116 containerd[1460]: 2025-11-01 00:14:42.855 [INFO][4573] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:42.875116 containerd[1460]: 2025-11-01 00:14:42.864 [WARNING][4573] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" HandleID="k8s-pod-network.f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Workload="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" Nov 1 00:14:42.875116 containerd[1460]: 2025-11-01 00:14:42.864 [INFO][4573] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" HandleID="k8s-pod-network.f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Workload="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" Nov 1 00:14:42.875116 containerd[1460]: 2025-11-01 00:14:42.867 [INFO][4573] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:42.875116 containerd[1460]: 2025-11-01 00:14:42.871 [INFO][4552] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Nov 1 00:14:42.875957 containerd[1460]: time="2025-11-01T00:14:42.875417192Z" level=info msg="TearDown network for sandbox \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\" successfully" Nov 1 00:14:42.875957 containerd[1460]: time="2025-11-01T00:14:42.875506439Z" level=info msg="StopPodSandbox for \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\" returns successfully" Nov 1 00:14:42.882768 kubelet[2506]: E1101 00:14:42.882718 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:42.884164 containerd[1460]: time="2025-11-01T00:14:42.884116897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4vmf6,Uid:53c7655e-7d0a-426b-a88e-be70b5c6070d,Namespace:kube-system,Attempt:1,}" Nov 1 00:14:42.907339 systemd[1]: run-netns-cni\x2de1b2e0da\x2dafc8\x2dd5a7\x2de3af\x2daef6d93d4d7a.mount: Deactivated successfully. Nov 1 00:14:42.919497 kubelet[2506]: E1101 00:14:42.918933 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558d5b9ff5-tdbvc" podUID="9265ab6d-1d0a-42f4-baa7-12e5c42cad61" Nov 1 00:14:42.925037 kubelet[2506]: E1101 00:14:42.924960 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f77d9cc7f-vdvzf" podUID="a74773bf-2487-45d4-8b3d-33b1f685360f" Nov 1 00:14:43.174819 systemd-networkd[1367]: cali8b671c7c584: Link UP Nov 1 00:14:43.176113 systemd-networkd[1367]: cali8b671c7c584: Gained carrier Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.059 [INFO][4584] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--4vmf6-eth0 coredns-66bc5c9577- kube-system 53c7655e-7d0a-426b-a88e-be70b5c6070d 1103 0 2025-11-01 00:14:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-4vmf6 eth0 coredns [] [] 
[kns.kube-system ksa.kube-system.coredns] cali8b671c7c584 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" Namespace="kube-system" Pod="coredns-66bc5c9577-4vmf6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4vmf6-" Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.059 [INFO][4584] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" Namespace="kube-system" Pod="coredns-66bc5c9577-4vmf6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.102 [INFO][4598] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" HandleID="k8s-pod-network.a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" Workload="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.103 [INFO][4598] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" HandleID="k8s-pod-network.a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" Workload="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002872a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-4vmf6", "timestamp":"2025-11-01 00:14:43.102646076 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.103 [INFO][4598] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.104 [INFO][4598] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
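[Annotation] The teardown logged above is deliberately idempotent: the IPAM plugin takes the host-wide lock, finds nothing recorded under the handle, logs "Asked to release address but it doesn't exist. Ignoring", and StopPodSandbox still returns successfully. A minimal Go sketch of that release path, assuming a toy in-memory store (the handleStore type and its layout are illustrative, not Calico's actual datastore):

```go
package main

import (
	"fmt"
	"sync"
)

// handleStore stands in for Calico's IPAM datastore, keyed by handle ID.
type handleStore struct {
	mu    sync.Mutex // plays the role of the host-wide IPAM lock in the log
	addrs map[string][]string
}

// Release frees every address recorded under handleID. A missing handle is
// not an error: CNI DEL must be idempotent, matching the "Asked to release
// address but it doesn't exist. Ignoring" warning above.
func (s *handleStore) Release(handleID string) []string {
	s.mu.Lock()
	defer s.mu.Unlock()
	released, ok := s.addrs[handleID]
	if !ok {
		fmt.Printf("WARNING: asked to release %q but it doesn't exist, ignoring\n", handleID)
		return nil
	}
	delete(s.addrs, handleID)
	return released
}

func main() {
	s := &handleStore{addrs: map[string][]string{}}
	// As in the log, the handle is already gone by the time DEL runs;
	// the release is a warning-level no-op and teardown still succeeds.
	s.Release("k8s-pod-network.f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96")
}
```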
Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.104 [INFO][4598] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.119 [INFO][4598] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" host="localhost" Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.128 [INFO][4598] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.137 [INFO][4598] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.140 [INFO][4598] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.143 [INFO][4598] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.143 [INFO][4598] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" host="localhost" Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.146 [INFO][4598] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3 Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.153 [INFO][4598] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" host="localhost" Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.166 [INFO][4598] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" host="localhost" Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.166 [INFO][4598] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" host="localhost" Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.166 [INFO][4598] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
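[Annotation] Each allocation request in this section follows the same shape: confirm this host's affinity for the 192.168.88.128/26 block, load the block, and claim the first free slot, which here yields 192.168.88.133. A sketch of that scan using net/netip; the allocated set below is hypothetical (Calico actually tracks allocations in a bitmap inside the block document), seeded on the assumption that .128 through .132 were claimed earlier in this boot, since this section hands out .133 through .136 in sequence:

```go
package main

import (
	"fmt"
	"net/netip"
)

// firstFree walks a CIDR block in address order and returns the first
// address not already allocated, mirroring the claim of 192.168.88.133
// from 192.168.88.128/26 in the log above.
func firstFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	allocated := map[netip.Addr]bool{}
	// Assumed already taken earlier in this boot (.133-.136 follow in the log).
	for _, s := range []string{
		"192.168.88.128", "192.168.88.129", "192.168.88.130",
		"192.168.88.131", "192.168.88.132",
	} {
		allocated[netip.MustParseAddr(s)] = true
	}
	ip, _ := firstFree(block, allocated)
	fmt.Println(ip) // 192.168.88.133
}
```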
Nov 1 00:14:43.227136 containerd[1460]: 2025-11-01 00:14:43.166 [INFO][4598] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" HandleID="k8s-pod-network.a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" Workload="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" Nov 1 00:14:43.230133 containerd[1460]: 2025-11-01 00:14:43.170 [INFO][4584] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" Namespace="kube-system" Pod="coredns-66bc5c9577-4vmf6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--4vmf6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"53c7655e-7d0a-426b-a88e-be70b5c6070d", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-4vmf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b671c7c584", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:43.230133 containerd[1460]: 2025-11-01 00:14:43.170 [INFO][4584] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" Namespace="kube-system" Pod="coredns-66bc5c9577-4vmf6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" Nov 1 00:14:43.230133 containerd[1460]: 2025-11-01 00:14:43.171 [INFO][4584] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b671c7c584 ContainerID="a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" Namespace="kube-system" Pod="coredns-66bc5c9577-4vmf6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" Nov 1 00:14:43.230133 containerd[1460]: 2025-11-01 00:14:43.178 
[INFO][4584] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" Namespace="kube-system" Pod="coredns-66bc5c9577-4vmf6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" Nov 1 00:14:43.230133 containerd[1460]: 2025-11-01 00:14:43.179 [INFO][4584] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" Namespace="kube-system" Pod="coredns-66bc5c9577-4vmf6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--4vmf6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"53c7655e-7d0a-426b-a88e-be70b5c6070d", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3", Pod:"coredns-66bc5c9577-4vmf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b671c7c584", MAC:"de:dd:aa:7f:40:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:43.230133 containerd[1460]: 2025-11-01 00:14:43.220 [INFO][4584] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3" Namespace="kube-system" Pod="coredns-66bc5c9577-4vmf6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" Nov 1 00:14:43.260903 containerd[1460]: time="2025-11-01T00:14:43.260231861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:14:43.260903 containerd[1460]: time="2025-11-01T00:14:43.260493182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:14:43.260903 containerd[1460]: time="2025-11-01T00:14:43.260557663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:43.263851 containerd[1460]: time="2025-11-01T00:14:43.261844347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:43.303033 systemd[1]: Started cri-containerd-a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3.scope - libcontainer container a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3. Nov 1 00:14:43.329886 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:14:43.378827 containerd[1460]: time="2025-11-01T00:14:43.378757694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4vmf6,Uid:53c7655e-7d0a-426b-a88e-be70b5c6070d,Namespace:kube-system,Attempt:1,} returns sandbox id \"a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3\"" Nov 1 00:14:43.381991 kubelet[2506]: E1101 00:14:43.381950 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:43.399512 containerd[1460]: time="2025-11-01T00:14:43.399458728Z" level=info msg="CreateContainer within sandbox \"a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:14:43.506030 systemd-networkd[1367]: vxlan.calico: Gained IPv6LL Nov 1 00:14:43.518870 containerd[1460]: time="2025-11-01T00:14:43.518785273Z" level=info msg="CreateContainer within sandbox \"a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"18b5f1d1395a861100a08ae9cbb09bb4612e62b4dbde939f36bc10f305b76966\"" Nov 1 00:14:43.519589 containerd[1460]: time="2025-11-01T00:14:43.519546973Z" level=info msg="StartContainer for \"18b5f1d1395a861100a08ae9cbb09bb4612e62b4dbde939f36bc10f305b76966\"" Nov 1 00:14:43.563328 systemd[1]: Started cri-containerd-18b5f1d1395a861100a08ae9cbb09bb4612e62b4dbde939f36bc10f305b76966.scope - libcontainer container 18b5f1d1395a861100a08ae9cbb09bb4612e62b4dbde939f36bc10f305b76966. 
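[Annotation] The WorkloadEndpoint dumps above print ports as Go hex literals; decoding them recovers the named container ports declared on the coredns pods:

```go
package main

import "fmt"

// The hex Port values in the v3.WorkloadEndpointPort dumps above decode to
// the familiar coredns ports.
func main() {
	ports := map[string]uint16{
		"dns":             0x35,   // 53/UDP
		"dns-tcp":         0x35,   // 53/TCP
		"metrics":         0x23c1, // 9153/TCP
		"liveness-probe":  0x1f90, // 8080/TCP
		"readiness-probe": 0x1ff5, // 8181/TCP
	}
	for name, p := range ports {
		fmt.Printf("%-16s %d\n", name, p)
	}
}
```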
Nov 1 00:14:43.652106 containerd[1460]: time="2025-11-01T00:14:43.652034724Z" level=info msg="StartContainer for \"18b5f1d1395a861100a08ae9cbb09bb4612e62b4dbde939f36bc10f305b76966\" returns successfully" Nov 1 00:14:43.707370 containerd[1460]: time="2025-11-01T00:14:43.706488291Z" level=info msg="StopPodSandbox for \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\"" Nov 1 00:14:43.709815 containerd[1460]: time="2025-11-01T00:14:43.709365780Z" level=info msg="StopPodSandbox for \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\"" Nov 1 00:14:43.715820 containerd[1460]: time="2025-11-01T00:14:43.712382993Z" level=info msg="StopPodSandbox for \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\"" Nov 1 00:14:43.890883 systemd-networkd[1367]: cali9225a046705: Gained IPv6LL Nov 1 00:14:43.908987 containerd[1460]: 2025-11-01 00:14:43.824 [INFO][4721] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Nov 1 00:14:43.908987 containerd[1460]: 2025-11-01 00:14:43.824 [INFO][4721] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" iface="eth0" netns="/var/run/netns/cni-2105fe70-327f-e54d-03d8-c48a54f54733" Nov 1 00:14:43.908987 containerd[1460]: 2025-11-01 00:14:43.825 [INFO][4721] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" iface="eth0" netns="/var/run/netns/cni-2105fe70-327f-e54d-03d8-c48a54f54733" Nov 1 00:14:43.908987 containerd[1460]: 2025-11-01 00:14:43.825 [INFO][4721] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" iface="eth0" netns="/var/run/netns/cni-2105fe70-327f-e54d-03d8-c48a54f54733" Nov 1 00:14:43.908987 containerd[1460]: 2025-11-01 00:14:43.825 [INFO][4721] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Nov 1 00:14:43.908987 containerd[1460]: 2025-11-01 00:14:43.825 [INFO][4721] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Nov 1 00:14:43.908987 containerd[1460]: 2025-11-01 00:14:43.870 [INFO][4748] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" HandleID="k8s-pod-network.ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Workload="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" Nov 1 00:14:43.908987 containerd[1460]: 2025-11-01 00:14:43.872 [INFO][4748] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:43.908987 containerd[1460]: 2025-11-01 00:14:43.872 [INFO][4748] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:43.908987 containerd[1460]: 2025-11-01 00:14:43.886 [WARNING][4748] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" HandleID="k8s-pod-network.ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Workload="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" Nov 1 00:14:43.908987 containerd[1460]: 2025-11-01 00:14:43.886 [INFO][4748] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" HandleID="k8s-pod-network.ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Workload="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" Nov 1 00:14:43.908987 containerd[1460]: 2025-11-01 00:14:43.892 [INFO][4748] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:43.908987 containerd[1460]: 2025-11-01 00:14:43.904 [INFO][4721] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Nov 1 00:14:43.908987 containerd[1460]: time="2025-11-01T00:14:43.908090360Z" level=info msg="TearDown network for sandbox \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\" successfully" Nov 1 00:14:43.908987 containerd[1460]: time="2025-11-01T00:14:43.908126618Z" level=info msg="StopPodSandbox for \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\" returns successfully" Nov 1 00:14:43.912128 systemd[1]: run-netns-cni\x2d2105fe70\x2d327f\x2de54d\x2d03d8\x2dc48a54f54733.mount: Deactivated successfully. Nov 1 00:14:43.926394 kubelet[2506]: E1101 00:14:43.926324 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:43.928797 kubelet[2506]: E1101 00:14:43.928749 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:43.930497 kubelet[2506]: E1101 00:14:43.929266 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f77d9cc7f-vdvzf" podUID="a74773bf-2487-45d4-8b3d-33b1f685360f" Nov 1 00:14:43.931728 containerd[1460]: time="2025-11-01T00:14:43.931670516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vw47h,Uid:6846208b-d846-430d-8df4-ccfb42c456d3,Namespace:kube-system,Attempt:1,}" Nov 1 00:14:43.947243 containerd[1460]: 2025-11-01 00:14:43.841 [INFO][4720] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Nov 1 00:14:43.947243 containerd[1460]: 2025-11-01 00:14:43.841 [INFO][4720] cni-plugin/dataplane_linux.go 559: 
Deleting workload's device in netns. ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" iface="eth0" netns="/var/run/netns/cni-e3ce3453-d26c-2fd9-f46e-f12b75b158dd" Nov 1 00:14:43.947243 containerd[1460]: 2025-11-01 00:14:43.843 [INFO][4720] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" iface="eth0" netns="/var/run/netns/cni-e3ce3453-d26c-2fd9-f46e-f12b75b158dd" Nov 1 00:14:43.947243 containerd[1460]: 2025-11-01 00:14:43.857 [INFO][4720] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" iface="eth0" netns="/var/run/netns/cni-e3ce3453-d26c-2fd9-f46e-f12b75b158dd" Nov 1 00:14:43.947243 containerd[1460]: 2025-11-01 00:14:43.858 [INFO][4720] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Nov 1 00:14:43.947243 containerd[1460]: 2025-11-01 00:14:43.858 [INFO][4720] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Nov 1 00:14:43.947243 containerd[1460]: 2025-11-01 00:14:43.907 [INFO][4758] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" HandleID="k8s-pod-network.627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Workload="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" Nov 1 00:14:43.947243 containerd[1460]: 2025-11-01 00:14:43.907 [INFO][4758] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:43.947243 containerd[1460]: 2025-11-01 00:14:43.910 [INFO][4758] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:43.947243 containerd[1460]: 2025-11-01 00:14:43.928 [WARNING][4758] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" HandleID="k8s-pod-network.627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Workload="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" Nov 1 00:14:43.947243 containerd[1460]: 2025-11-01 00:14:43.928 [INFO][4758] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" HandleID="k8s-pod-network.627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Workload="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" Nov 1 00:14:43.947243 containerd[1460]: 2025-11-01 00:14:43.936 [INFO][4758] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:43.947243 containerd[1460]: 2025-11-01 00:14:43.944 [INFO][4720] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Nov 1 00:14:43.949111 containerd[1460]: time="2025-11-01T00:14:43.949058774Z" level=info msg="TearDown network for sandbox \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\" successfully" Nov 1 00:14:43.949111 containerd[1460]: time="2025-11-01T00:14:43.949097877Z" level=info msg="StopPodSandbox for \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\" returns successfully" Nov 1 00:14:43.952299 systemd[1]: run-netns-cni\x2de3ce3453\x2dd26c\x2d2fd9\x2df46e\x2df12b75b158dd.mount: Deactivated successfully. 
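[Annotation] The run-netns-cni\x2d... mount units that systemd reports deactivating are the CNI netns bind mounts under /var/run/netns, with the filesystem path escaped into a unit name. A minimal sketch of that escaping, enough to reproduce the unit names in this log (not a complete systemd-escape implementation; real systemd also special-cases a leading dot, among other rules):

```go
package main

import "fmt"

// systemdEscape sketches systemd's path-to-unit-name escaping: the leading
// "/" is dropped, remaining "/" become "-", and bytes outside [a-zA-Z0-9_.:]
// (notably "-" itself) become \xXX hex escapes.
func systemdEscape(path string) string {
	s := path
	if len(s) > 0 && s[0] == '/' {
		s = s[1:]
	}
	out := ""
	for i := 0; i < len(s); i++ {
		c := s[i]
		switch {
		case c == '/':
			out += "-"
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.', c == ':':
			out += string(c)
		default:
			out += fmt.Sprintf(`\x%02x`, c)
		}
	}
	return out
}

func main() {
	fmt.Println(systemdEscape("/run/netns/cni-e3ce3453-d26c-2fd9-f46e-f12b75b158dd") + ".mount")
	// run-netns-cni\x2de3ce3453\x2dd26c\x2d2fd9\x2df46e\x2df12b75b158dd.mount
}
```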
Nov 1 00:14:43.959290 containerd[1460]: time="2025-11-01T00:14:43.959156491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b56b4988b-mnxsg,Uid:42d6452e-a1e5-4daf-80fd-e1f205f5b03a,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:14:43.982680 kubelet[2506]: I1101 00:14:43.981866 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4vmf6" podStartSLOduration=43.981838201 podStartE2EDuration="43.981838201s" podCreationTimestamp="2025-11-01 00:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:14:43.947594175 +0000 UTC m=+48.375495738" watchObservedRunningTime="2025-11-01 00:14:43.981838201 +0000 UTC m=+48.409739754" Nov 1 00:14:43.984807 containerd[1460]: 2025-11-01 00:14:43.854 [INFO][4722] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Nov 1 00:14:43.984807 containerd[1460]: 2025-11-01 00:14:43.855 [INFO][4722] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" iface="eth0" netns="/var/run/netns/cni-e2758b65-f7c1-1a34-39cc-131bb59813b6" Nov 1 00:14:43.984807 containerd[1460]: 2025-11-01 00:14:43.855 [INFO][4722] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" iface="eth0" netns="/var/run/netns/cni-e2758b65-f7c1-1a34-39cc-131bb59813b6" Nov 1 00:14:43.984807 containerd[1460]: 2025-11-01 00:14:43.856 [INFO][4722] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" iface="eth0" netns="/var/run/netns/cni-e2758b65-f7c1-1a34-39cc-131bb59813b6" Nov 1 00:14:43.984807 containerd[1460]: 2025-11-01 00:14:43.857 [INFO][4722] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Nov 1 00:14:43.984807 containerd[1460]: 2025-11-01 00:14:43.857 [INFO][4722] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Nov 1 00:14:43.984807 containerd[1460]: 2025-11-01 00:14:43.914 [INFO][4759] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" HandleID="k8s-pod-network.b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Workload="localhost-k8s-goldmane--7c778bb748--xs644-eth0" Nov 1 00:14:43.984807 containerd[1460]: 2025-11-01 00:14:43.915 [INFO][4759] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:43.984807 containerd[1460]: 2025-11-01 00:14:43.936 [INFO][4759] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:43.984807 containerd[1460]: 2025-11-01 00:14:43.954 [WARNING][4759] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" HandleID="k8s-pod-network.b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Workload="localhost-k8s-goldmane--7c778bb748--xs644-eth0" Nov 1 00:14:43.984807 containerd[1460]: 2025-11-01 00:14:43.954 [INFO][4759] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" HandleID="k8s-pod-network.b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Workload="localhost-k8s-goldmane--7c778bb748--xs644-eth0" Nov 1 00:14:43.984807 containerd[1460]: 2025-11-01 00:14:43.967 [INFO][4759] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:43.984807 containerd[1460]: 2025-11-01 00:14:43.977 [INFO][4722] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Nov 1 00:14:43.987364 containerd[1460]: time="2025-11-01T00:14:43.986641173Z" level=info msg="TearDown network for sandbox \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\" successfully" Nov 1 00:14:43.987364 containerd[1460]: time="2025-11-01T00:14:43.986783702Z" level=info msg="StopPodSandbox for \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\" returns successfully" Nov 1 00:14:43.992382 containerd[1460]: time="2025-11-01T00:14:43.992337382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xs644,Uid:e75afe96-48a0-4769-9bc6-591261c95345,Namespace:calico-system,Attempt:1,}" Nov 1 00:14:44.188238 systemd-networkd[1367]: cali21ae2a0746b: Link UP Nov 1 00:14:44.189804 systemd-networkd[1367]: cali21ae2a0746b: Gained carrier Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.060 [INFO][4775] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--vw47h-eth0 coredns-66bc5c9577- kube-system 6846208b-d846-430d-8df4-ccfb42c456d3 1131 0 2025-11-01 00:14:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-vw47h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali21ae2a0746b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" Namespace="kube-system" Pod="coredns-66bc5c9577-vw47h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--vw47h-" Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.061 [INFO][4775] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" Namespace="kube-system" Pod="coredns-66bc5c9577-vw47h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.111 [INFO][4817] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" HandleID="k8s-pod-network.edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" Workload="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.111 [INFO][4817] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" HandleID="k8s-pod-network.edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" Workload="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-vw47h", "timestamp":"2025-11-01 00:14:44.111115532 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.111 [INFO][4817] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.111 [INFO][4817] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.111 [INFO][4817] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.124 [INFO][4817] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" host="localhost" Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.137 [INFO][4817] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.146 [INFO][4817] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.150 [INFO][4817] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.155 [INFO][4817] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.155 [INFO][4817] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" host="localhost" Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.158 [INFO][4817] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274 Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.165 [INFO][4817] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" host="localhost" Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.176 [INFO][4817] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" host="localhost" Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.176 [INFO][4817] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" host="localhost" Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.176 [INFO][4817] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:14:44.212172 containerd[1460]: 2025-11-01 00:14:44.176 [INFO][4817] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" HandleID="k8s-pod-network.edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" Workload="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" Nov 1 00:14:44.213113 containerd[1460]: 2025-11-01 00:14:44.181 [INFO][4775] cni-plugin/k8s.go 418: Populated endpoint ContainerID="edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" Namespace="kube-system" Pod="coredns-66bc5c9577-vw47h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--vw47h-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6846208b-d846-430d-8df4-ccfb42c456d3", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-vw47h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21ae2a0746b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:44.213113 containerd[1460]: 2025-11-01 00:14:44.181 [INFO][4775] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" Namespace="kube-system" Pod="coredns-66bc5c9577-vw47h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" Nov 1 00:14:44.213113 containerd[1460]: 2025-11-01 00:14:44.181 [INFO][4775] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21ae2a0746b ContainerID="edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" Namespace="kube-system" Pod="coredns-66bc5c9577-vw47h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" Nov 1 00:14:44.213113 containerd[1460]: 2025-11-01 00:14:44.184 
[INFO][4775] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" Namespace="kube-system" Pod="coredns-66bc5c9577-vw47h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" Nov 1 00:14:44.213113 containerd[1460]: 2025-11-01 00:14:44.185 [INFO][4775] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" Namespace="kube-system" Pod="coredns-66bc5c9577-vw47h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--vw47h-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6846208b-d846-430d-8df4-ccfb42c456d3", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274", Pod:"coredns-66bc5c9577-vw47h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21ae2a0746b", MAC:"96:25:69:31:9e:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:44.213113 containerd[1460]: 2025-11-01 00:14:44.205 [INFO][4775] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274" Namespace="kube-system" Pod="coredns-66bc5c9577-vw47h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" Nov 1 00:14:44.248486 containerd[1460]: time="2025-11-01T00:14:44.248343157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:14:44.249059 containerd[1460]: time="2025-11-01T00:14:44.248498239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:14:44.249059 containerd[1460]: time="2025-11-01T00:14:44.248530219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:44.249059 containerd[1460]: time="2025-11-01T00:14:44.248657668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:44.279961 systemd[1]: Started cri-containerd-edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274.scope - libcontainer container edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274. Nov 1 00:14:44.297971 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:14:44.330663 containerd[1460]: time="2025-11-01T00:14:44.330600571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vw47h,Uid:6846208b-d846-430d-8df4-ccfb42c456d3,Namespace:kube-system,Attempt:1,} returns sandbox id \"edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274\"" Nov 1 00:14:44.331569 kubelet[2506]: E1101 00:14:44.331541 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:44.485905 systemd-networkd[1367]: cali261ead9739e: Link UP Nov 1 00:14:44.486583 systemd-networkd[1367]: cali261ead9739e: Gained carrier Nov 1 00:14:44.493209 containerd[1460]: time="2025-11-01T00:14:44.492942670Z" level=info msg="CreateContainer within sandbox \"edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:14:44.654468 containerd[1460]: 2025-11-01 00:14:44.082 [INFO][4788] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0 calico-apiserver-b56b4988b- calico-apiserver 42d6452e-a1e5-4daf-80fd-e1f205f5b03a 1133 0 2025-11-01 00:14:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b56b4988b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b56b4988b-mnxsg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali261ead9739e [] [] }} ContainerID="7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" Namespace="calico-apiserver" Pod="calico-apiserver-b56b4988b-mnxsg" WorkloadEndpoint="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-" Nov 1 00:14:44.654468 containerd[1460]: 2025-11-01 00:14:44.083 [INFO][4788] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" Namespace="calico-apiserver" Pod="calico-apiserver-b56b4988b-mnxsg" WorkloadEndpoint="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" Nov 1 00:14:44.654468 containerd[1460]: 2025-11-01 00:14:44.165 [INFO][4828] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" HandleID="k8s-pod-network.7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" Workload="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" Nov 1 00:14:44.654468 
containerd[1460]: 2025-11-01 00:14:44.166 [INFO][4828] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" HandleID="k8s-pod-network.7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" Workload="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042e1e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b56b4988b-mnxsg", "timestamp":"2025-11-01 00:14:44.165987039 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:14:44.654468 containerd[1460]: 2025-11-01 00:14:44.166 [INFO][4828] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:44.654468 containerd[1460]: 2025-11-01 00:14:44.177 [INFO][4828] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:44.654468 containerd[1460]: 2025-11-01 00:14:44.177 [INFO][4828] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:14:44.654468 containerd[1460]: 2025-11-01 00:14:44.224 [INFO][4828] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" host="localhost" Nov 1 00:14:44.654468 containerd[1460]: 2025-11-01 00:14:44.239 [INFO][4828] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:14:44.654468 containerd[1460]: 2025-11-01 00:14:44.248 [INFO][4828] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:14:44.654468 containerd[1460]: 2025-11-01 00:14:44.254 [INFO][4828] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:14:44.654468 containerd[1460]: 2025-11-01 00:14:44.258 [INFO][4828] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:14:44.654468 containerd[1460]: 2025-11-01 00:14:44.258 [INFO][4828] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" host="localhost" Nov 1 00:14:44.654468 containerd[1460]: 2025-11-01 00:14:44.261 [INFO][4828] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26 Nov 1 00:14:44.654468 containerd[1460]: 2025-11-01 00:14:44.286 [INFO][4828] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" host="localhost" Nov 1 00:14:44.654468 containerd[1460]: 2025-11-01 00:14:44.477 [INFO][4828] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" host="localhost" Nov 1 00:14:44.654468 containerd[1460]: 2025-11-01 00:14:44.477 [INFO][4828] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" host="localhost" Nov 1 00:14:44.654468 containerd[1460]: 2025-11-01 00:14:44.477 [INFO][4828] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
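[Annotation] Each "Setting the host side veth name" line picks a 15-byte interface name: the "cali" prefix plus an 11-character suffix derived from a hash of the workload endpoint identity, staying within Linux's IFNAMSIZ limit. A sketch of that scheme; the choice of SHA-1 over the endpoint name below is an assumption for illustration, not necessarily the exact input Calico hashes:

```go
package main

import (
	"crypto/sha1"
	"fmt"
)

// Linux interface names are capped at 15 bytes (IFNAMSIZ minus the NUL), so
// "cali" leaves room for an 11-character per-endpoint suffix taken from the
// front of a hash. The hash input here is assumed for illustration.
func vethName(prefix, endpointID string) string {
	sum := sha1.Sum([]byte(endpointID))
	return prefix + fmt.Sprintf("%x", sum)[:11]
}

func main() {
	// 4 + 11 = 15 characters, same shape as cali261ead9739e above.
	fmt.Println(vethName("cali", "localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0"))
}
```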
Nov 1 00:14:44.654468 containerd[1460]: 2025-11-01 00:14:44.477 [INFO][4828] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" HandleID="k8s-pod-network.7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" Workload="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" Nov 1 00:14:44.655378 containerd[1460]: 2025-11-01 00:14:44.480 [INFO][4788] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" Namespace="calico-apiserver" Pod="calico-apiserver-b56b4988b-mnxsg" WorkloadEndpoint="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0", GenerateName:"calico-apiserver-b56b4988b-", Namespace:"calico-apiserver", SelfLink:"", UID:"42d6452e-a1e5-4daf-80fd-e1f205f5b03a", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b56b4988b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b56b4988b-mnxsg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali261ead9739e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:44.655378 containerd[1460]: 2025-11-01 00:14:44.481 [INFO][4788] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" Namespace="calico-apiserver" Pod="calico-apiserver-b56b4988b-mnxsg" WorkloadEndpoint="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" Nov 1 00:14:44.655378 containerd[1460]: 2025-11-01 00:14:44.481 [INFO][4788] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali261ead9739e ContainerID="7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" Namespace="calico-apiserver" Pod="calico-apiserver-b56b4988b-mnxsg" WorkloadEndpoint="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" Nov 1 00:14:44.655378 containerd[1460]: 2025-11-01 00:14:44.486 [INFO][4788] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" Namespace="calico-apiserver" Pod="calico-apiserver-b56b4988b-mnxsg" WorkloadEndpoint="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" Nov 1 00:14:44.655378 containerd[1460]: 2025-11-01 00:14:44.488 [INFO][4788] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" Namespace="calico-apiserver" Pod="calico-apiserver-b56b4988b-mnxsg" WorkloadEndpoint="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0", GenerateName:"calico-apiserver-b56b4988b-", Namespace:"calico-apiserver", SelfLink:"", UID:"42d6452e-a1e5-4daf-80fd-e1f205f5b03a", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b56b4988b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26", Pod:"calico-apiserver-b56b4988b-mnxsg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali261ead9739e", MAC:"ce:4d:09:bd:0a:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:44.655378 containerd[1460]: 2025-11-01 00:14:44.642 [INFO][4788] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26" Namespace="calico-apiserver" Pod="calico-apiserver-b56b4988b-mnxsg" WorkloadEndpoint="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" Nov 1 00:14:44.670162 containerd[1460]: time="2025-11-01T00:14:44.670088778Z" level=info msg="CreateContainer within sandbox \"edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c465713f9af94c69c342e1a46d498c74a1ca9c933771677b9ae773007870cb9d\"" Nov 1 00:14:44.676735 containerd[1460]: time="2025-11-01T00:14:44.674398284Z" level=info msg="StartContainer for \"c465713f9af94c69c342e1a46d498c74a1ca9c933771677b9ae773007870cb9d\"" Nov 1 00:14:44.716852 containerd[1460]: time="2025-11-01T00:14:44.716668237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:14:44.717812 containerd[1460]: time="2025-11-01T00:14:44.716884363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:14:44.717812 containerd[1460]: time="2025-11-01T00:14:44.716904420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:44.717812 containerd[1460]: time="2025-11-01T00:14:44.717651282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:44.723916 systemd[1]: Started cri-containerd-c465713f9af94c69c342e1a46d498c74a1ca9c933771677b9ae773007870cb9d.scope - libcontainer container c465713f9af94c69c342e1a46d498c74a1ca9c933771677b9ae773007870cb9d. Nov 1 00:14:44.744865 systemd[1]: Started cri-containerd-7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26.scope - libcontainer container 7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26. Nov 1 00:14:44.765898 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:14:44.860232 containerd[1460]: time="2025-11-01T00:14:44.860140080Z" level=info msg="StartContainer for \"c465713f9af94c69c342e1a46d498c74a1ca9c933771677b9ae773007870cb9d\" returns successfully" Nov 1 00:14:44.860523 containerd[1460]: time="2025-11-01T00:14:44.860465170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b56b4988b-mnxsg,Uid:42d6452e-a1e5-4daf-80fd-e1f205f5b03a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26\"" Nov 1 00:14:44.867464 containerd[1460]: time="2025-11-01T00:14:44.866680652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:14:44.880290 systemd-networkd[1367]: cali26aadd30243: Link UP Nov 1 00:14:44.884902 systemd-networkd[1367]: cali26aadd30243: Gained carrier Nov 1 00:14:44.914230 systemd[1]: run-netns-cni\x2de2758b65\x2df7c1\x2d1a34\x2d39cc\x2d131bb59813b6.mount: Deactivated successfully. Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.116 [INFO][4802] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--xs644-eth0 goldmane-7c778bb748- calico-system e75afe96-48a0-4769-9bc6-591261c95345 1132 0 2025-11-01 00:14:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-xs644 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali26aadd30243 [] [] }} ContainerID="dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" Namespace="calico-system" Pod="goldmane-7c778bb748-xs644" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--xs644-" Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.116 [INFO][4802] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" Namespace="calico-system" Pod="goldmane-7c778bb748-xs644" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--xs644-eth0" Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.178 [INFO][4840] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" HandleID="k8s-pod-network.dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" Workload="localhost-k8s-goldmane--7c778bb748--xs644-eth0" Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.178 [INFO][4840] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" HandleID="k8s-pod-network.dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" 
Workload="localhost-k8s-goldmane--7c778bb748--xs644-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138320), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-xs644", "timestamp":"2025-11-01 00:14:44.178359594 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.178 [INFO][4840] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.477 [INFO][4840] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.477 [INFO][4840] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.644 [INFO][4840] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" host="localhost" Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.658 [INFO][4840] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.678 [INFO][4840] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.683 [INFO][4840] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.690 [INFO][4840] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.690 [INFO][4840] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" host="localhost" Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.697 [INFO][4840] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8 Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.707 [INFO][4840] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" host="localhost" Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.863 [INFO][4840] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" host="localhost" Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.863 [INFO][4840] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" host="localhost" Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.863 [INFO][4840] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:14:44.917755 containerd[1460]: 2025-11-01 00:14:44.863 [INFO][4840] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" HandleID="k8s-pod-network.dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" Workload="localhost-k8s-goldmane--7c778bb748--xs644-eth0" Nov 1 00:14:44.918753 containerd[1460]: 2025-11-01 00:14:44.870 [INFO][4802] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" Namespace="calico-system" Pod="goldmane-7c778bb748-xs644" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--xs644-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--xs644-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"e75afe96-48a0-4769-9bc6-591261c95345", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-xs644", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali26aadd30243", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:44.918753 containerd[1460]: 2025-11-01 00:14:44.871 [INFO][4802] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" Namespace="calico-system" Pod="goldmane-7c778bb748-xs644" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--xs644-eth0" Nov 1 00:14:44.918753 containerd[1460]: 2025-11-01 00:14:44.871 [INFO][4802] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26aadd30243 ContainerID="dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" Namespace="calico-system" Pod="goldmane-7c778bb748-xs644" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--xs644-eth0" Nov 1 00:14:44.918753 containerd[1460]: 2025-11-01 00:14:44.885 [INFO][4802] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" Namespace="calico-system" Pod="goldmane-7c778bb748-xs644" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--xs644-eth0" Nov 1 00:14:44.918753 containerd[1460]: 2025-11-01 00:14:44.890 [INFO][4802] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" Namespace="calico-system" Pod="goldmane-7c778bb748-xs644" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--xs644-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--xs644-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"e75afe96-48a0-4769-9bc6-591261c95345", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8", Pod:"goldmane-7c778bb748-xs644", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali26aadd30243", MAC:"fe:a0:6e:4c:7c:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:44.918753 containerd[1460]: 2025-11-01 00:14:44.911 [INFO][4802] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8" Namespace="calico-system" Pod="goldmane-7c778bb748-xs644" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--xs644-eth0" Nov 1 00:14:44.942928 kubelet[2506]: E1101 00:14:44.942877 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:44.943626 kubelet[2506]: E1101 00:14:44.943479 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:44.962760 containerd[1460]: time="2025-11-01T00:14:44.962220899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:14:44.962760 containerd[1460]: time="2025-11-01T00:14:44.962303854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:14:44.962760 containerd[1460]: time="2025-11-01T00:14:44.962322980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:44.963177 containerd[1460]: time="2025-11-01T00:14:44.962719674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:14:44.974996 kubelet[2506]: I1101 00:14:44.974901 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vw47h" podStartSLOduration=44.974844094 podStartE2EDuration="44.974844094s" podCreationTimestamp="2025-11-01 00:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:14:44.973117775 +0000 UTC m=+49.401019358" watchObservedRunningTime="2025-11-01 00:14:44.974844094 +0000 UTC m=+49.402745657" Nov 1 00:14:45.001885 systemd[1]: Started cri-containerd-dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8.scope - libcontainer container dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8. Nov 1 00:14:45.046090 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:14:45.080989 containerd[1460]: time="2025-11-01T00:14:45.080928269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xs644,Uid:e75afe96-48a0-4769-9bc6-591261c95345,Namespace:calico-system,Attempt:1,} returns sandbox id \"dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8\"" Nov 1 00:14:45.106947 systemd-networkd[1367]: cali8b671c7c584: Gained IPv6LL Nov 1 00:14:45.204776 containerd[1460]: time="2025-11-01T00:14:45.204698627Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:14:45.207185 containerd[1460]: time="2025-11-01T00:14:45.207036474Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:14:45.207185 containerd[1460]: time="2025-11-01T00:14:45.207094102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:14:45.207423 kubelet[2506]: E1101 00:14:45.207350 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:14:45.207423 kubelet[2506]: E1101 00:14:45.207424 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:14:45.207707 kubelet[2506]: E1101 00:14:45.207656 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-b56b4988b-mnxsg_calico-apiserver(42d6452e-a1e5-4daf-80fd-e1f205f5b03a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:14:45.207774 kubelet[2506]: E1101 00:14:45.207736 2506 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b56b4988b-mnxsg" podUID="42d6452e-a1e5-4daf-80fd-e1f205f5b03a" Nov 1 00:14:45.208456 containerd[1460]: time="2025-11-01T00:14:45.208424408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:14:45.534129 containerd[1460]: time="2025-11-01T00:14:45.534050484Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:14:45.536246 containerd[1460]: time="2025-11-01T00:14:45.536190990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:14:45.536357 containerd[1460]: time="2025-11-01T00:14:45.536314353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:14:45.536625 kubelet[2506]: E1101 00:14:45.536575 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:14:45.536718 kubelet[2506]: E1101 00:14:45.536643 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:14:45.536811 kubelet[2506]: E1101 00:14:45.536778 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xs644_calico-system(e75afe96-48a0-4769-9bc6-591261c95345): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:14:45.536886 kubelet[2506]: E1101 00:14:45.536829 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xs644" podUID="e75afe96-48a0-4769-9bc6-591261c95345" Nov 1 00:14:45.618660 systemd-networkd[1367]: cali261ead9739e: Gained IPv6LL Nov 1 00:14:45.714137 systemd[1]: Started sshd@9-10.0.0.19:22-10.0.0.1:49628.service - OpenSSH per-connection server daemon (10.0.0.1:49628). 
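Every pull failure in this capture follows the same shape: containerd walks its configured registry hosts, ghcr.io answers the manifest probe for the v3.30.4 tag with 404, and the result reaches the kubelet as a NotFound ErrImagePull ("trying next host - response was http.StatusNotFound"). The Go sketch below reproduces that probe against the standard OCI distribution endpoint /v2/<name>/manifests/<tag>; the anonymous token handshake via ghcr.io/token is an assumption about GHCR's public-image auth, not something shown in the log.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	name, tag := "flatcar/calico/apiserver", "v3.30.4"

	// Step 1 (assumed GHCR behavior): fetch an anonymous pull token.
	tr, err := http.Get("https://ghcr.io/token?scope=repository:" + name + ":pull")
	if err != nil {
		panic(err)
	}
	defer tr.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(tr.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// Step 2: HEAD the manifest; 200 means the tag exists, 404 is the
	// NotFound that containerd logs above.
	req, _ := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+name+"/manifests/"+tag, nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept",
		"application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(name+":"+tag, "->", resp.Status)
}

A 404 here means the tag is simply absent from the repository, so no amount of retrying helps; the kubelet's ImagePullBackOff a few lines later is the expected follow-up.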
Nov 1 00:14:45.751744 sshd[5043]: Accepted publickey for core from 10.0.0.1 port 49628 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:14:45.754205 sshd[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:14:45.759251 systemd-logind[1438]: New session 10 of user core. Nov 1 00:14:45.769971 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 00:14:45.900951 sshd[5043]: pam_unix(sshd:session): session closed for user core Nov 1 00:14:45.907012 systemd[1]: sshd@9-10.0.0.19:22-10.0.0.1:49628.service: Deactivated successfully. Nov 1 00:14:45.910069 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:14:45.910975 systemd-logind[1438]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:14:45.912296 systemd-logind[1438]: Removed session 10. Nov 1 00:14:45.937951 systemd-networkd[1367]: cali21ae2a0746b: Gained IPv6LL Nov 1 00:14:45.947813 kubelet[2506]: E1101 00:14:45.947763 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:45.948347 kubelet[2506]: E1101 00:14:45.947880 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:45.948943 kubelet[2506]: E1101 00:14:45.948900 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b56b4988b-mnxsg" podUID="42d6452e-a1e5-4daf-80fd-e1f205f5b03a" Nov 1 00:14:45.949485 kubelet[2506]: E1101 00:14:45.949430 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xs644" podUID="e75afe96-48a0-4769-9bc6-591261c95345" Nov 1 00:14:46.066084 systemd-networkd[1367]: cali26aadd30243: Gained IPv6LL Nov 1 00:14:46.953395 kubelet[2506]: E1101 00:14:46.953312 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:14:46.954984 kubelet[2506]: E1101 00:14:46.954936 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xs644" 
podUID="e75afe96-48a0-4769-9bc6-591261c95345" Nov 1 00:14:50.918125 systemd[1]: Started sshd@10-10.0.0.19:22-10.0.0.1:50170.service - OpenSSH per-connection server daemon (10.0.0.1:50170). Nov 1 00:14:50.955027 sshd[5070]: Accepted publickey for core from 10.0.0.1 port 50170 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:14:50.956826 sshd[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:14:50.961564 systemd-logind[1438]: New session 11 of user core. Nov 1 00:14:50.970867 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 00:14:51.092596 sshd[5070]: pam_unix(sshd:session): session closed for user core Nov 1 00:14:51.097767 systemd[1]: sshd@10-10.0.0.19:22-10.0.0.1:50170.service: Deactivated successfully. Nov 1 00:14:51.100268 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:14:51.101040 systemd-logind[1438]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:14:51.102812 systemd-logind[1438]: Removed session 11. Nov 1 00:14:53.698836 containerd[1460]: time="2025-11-01T00:14:53.698598895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:14:54.000710 containerd[1460]: time="2025-11-01T00:14:54.000479123Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:14:54.007421 containerd[1460]: time="2025-11-01T00:14:54.007289522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:14:54.007421 containerd[1460]: time="2025-11-01T00:14:54.007355018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:14:54.007758 kubelet[2506]: E1101 00:14:54.007663 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:14:54.008163 kubelet[2506]: E1101 00:14:54.007769 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:14:54.008163 kubelet[2506]: E1101 00:14:54.007900 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-b56b4988b-k8vh2_calico-apiserver(f11d7d31-f676-4516-b063-ddcb43a2faf5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:14:54.008163 kubelet[2506]: E1101 00:14:54.007950 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b56b4988b-k8vh2" podUID="f11d7d31-f676-4516-b063-ddcb43a2faf5" Nov 1 00:14:54.697708 containerd[1460]: time="2025-11-01T00:14:54.697390443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:14:55.009145 containerd[1460]: time="2025-11-01T00:14:55.008959545Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:14:55.010823 containerd[1460]: time="2025-11-01T00:14:55.010784946Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:14:55.011840 containerd[1460]: time="2025-11-01T00:14:55.010886661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:14:55.011940 kubelet[2506]: E1101 00:14:55.011032 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:14:55.011940 kubelet[2506]: E1101 00:14:55.011095 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:14:55.011940 kubelet[2506]: E1101 00:14:55.011198 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ln244_calico-system(d5658fcc-61ca-4e96-9f79-25e33876cacb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:14:55.012720 containerd[1460]: time="2025-11-01T00:14:55.012658629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:14:55.341679 containerd[1460]: time="2025-11-01T00:14:55.341510217Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:14:55.441996 containerd[1460]: time="2025-11-01T00:14:55.441901813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:14:55.442208 containerd[1460]: time="2025-11-01T00:14:55.441961537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:14:55.442289 kubelet[2506]: E1101 00:14:55.442230 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:14:55.442391 kubelet[2506]: E1101 00:14:55.442296 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:14:55.442431 kubelet[2506]: E1101 00:14:55.442400 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ln244_calico-system(d5658fcc-61ca-4e96-9f79-25e33876cacb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:14:55.442529 kubelet[2506]: E1101 00:14:55.442464 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ln244" podUID="d5658fcc-61ca-4e96-9f79-25e33876cacb" Nov 1 00:14:55.681148 containerd[1460]: time="2025-11-01T00:14:55.680861989Z" level=info msg="StopPodSandbox for \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\"" Nov 1 00:14:55.699060 containerd[1460]: time="2025-11-01T00:14:55.698871036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:14:55.781047 containerd[1460]: 2025-11-01 00:14:55.730 [WARNING][5102] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--4vmf6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"53c7655e-7d0a-426b-a88e-be70b5c6070d", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3", Pod:"coredns-66bc5c9577-4vmf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b671c7c584", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:55.781047 containerd[1460]: 2025-11-01 00:14:55.730 [INFO][5102] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Nov 1 00:14:55.781047 containerd[1460]: 2025-11-01 00:14:55.730 [INFO][5102] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" iface="eth0" netns="" Nov 1 00:14:55.781047 containerd[1460]: 2025-11-01 00:14:55.730 [INFO][5102] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Nov 1 00:14:55.781047 containerd[1460]: 2025-11-01 00:14:55.730 [INFO][5102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Nov 1 00:14:55.781047 containerd[1460]: 2025-11-01 00:14:55.763 [INFO][5113] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" HandleID="k8s-pod-network.f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Workload="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" Nov 1 00:14:55.781047 containerd[1460]: 2025-11-01 00:14:55.763 [INFO][5113] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:55.781047 containerd[1460]: 2025-11-01 00:14:55.763 [INFO][5113] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:55.781047 containerd[1460]: 2025-11-01 00:14:55.772 [WARNING][5113] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" HandleID="k8s-pod-network.f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Workload="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" Nov 1 00:14:55.781047 containerd[1460]: 2025-11-01 00:14:55.772 [INFO][5113] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" HandleID="k8s-pod-network.f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Workload="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" Nov 1 00:14:55.781047 containerd[1460]: 2025-11-01 00:14:55.774 [INFO][5113] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:55.781047 containerd[1460]: 2025-11-01 00:14:55.777 [INFO][5102] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Nov 1 00:14:55.781760 containerd[1460]: time="2025-11-01T00:14:55.781126031Z" level=info msg="TearDown network for sandbox \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\" successfully" Nov 1 00:14:55.781760 containerd[1460]: time="2025-11-01T00:14:55.781171769Z" level=info msg="StopPodSandbox for \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\" returns successfully" Nov 1 00:14:55.781999 containerd[1460]: time="2025-11-01T00:14:55.781950542Z" level=info msg="RemovePodSandbox for \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\"" Nov 1 00:14:55.784414 containerd[1460]: time="2025-11-01T00:14:55.784360525Z" level=info msg="Forcibly stopping sandbox \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\"" Nov 1 00:14:55.957344 containerd[1460]: 2025-11-01 00:14:55.899 [WARNING][5131] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--4vmf6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"53c7655e-7d0a-426b-a88e-be70b5c6070d", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4857b3577758035a95f86e9640bd1b1e5b481c3cd945c0b28e77a551349b9b3", Pod:"coredns-66bc5c9577-4vmf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b671c7c584", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:55.957344 containerd[1460]: 2025-11-01 00:14:55.899 [INFO][5131] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Nov 1 00:14:55.957344 containerd[1460]: 2025-11-01 00:14:55.899 [INFO][5131] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" iface="eth0" netns="" Nov 1 00:14:55.957344 containerd[1460]: 2025-11-01 00:14:55.899 [INFO][5131] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Nov 1 00:14:55.957344 containerd[1460]: 2025-11-01 00:14:55.899 [INFO][5131] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Nov 1 00:14:55.957344 containerd[1460]: 2025-11-01 00:14:55.922 [INFO][5139] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" HandleID="k8s-pod-network.f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Workload="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" Nov 1 00:14:55.957344 containerd[1460]: 2025-11-01 00:14:55.923 [INFO][5139] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:55.957344 containerd[1460]: 2025-11-01 00:14:55.923 [INFO][5139] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:55.957344 containerd[1460]: 2025-11-01 00:14:55.930 [WARNING][5139] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" HandleID="k8s-pod-network.f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Workload="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" Nov 1 00:14:55.957344 containerd[1460]: 2025-11-01 00:14:55.946 [INFO][5139] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" HandleID="k8s-pod-network.f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Workload="localhost-k8s-coredns--66bc5c9577--4vmf6-eth0" Nov 1 00:14:55.957344 containerd[1460]: 2025-11-01 00:14:55.951 [INFO][5139] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:55.957344 containerd[1460]: 2025-11-01 00:14:55.954 [INFO][5131] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96" Nov 1 00:14:55.957868 containerd[1460]: time="2025-11-01T00:14:55.957339185Z" level=info msg="TearDown network for sandbox \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\" successfully" Nov 1 00:14:56.003540 containerd[1460]: time="2025-11-01T00:14:56.003452507Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:14:56.049639 containerd[1460]: time="2025-11-01T00:14:56.049510021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:14:56.050896 containerd[1460]: time="2025-11-01T00:14:56.049614572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:14:56.051016 kubelet[2506]: E1101 00:14:56.049902 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:14:56.051016 kubelet[2506]: E1101 00:14:56.049979 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:14:56.051016 kubelet[2506]: E1101 00:14:56.050098 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-558d5b9ff5-tdbvc_calico-system(9265ab6d-1d0a-42f4-baa7-12e5c42cad61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:14:56.051016 kubelet[2506]: E1101 00:14:56.050181 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558d5b9ff5-tdbvc" podUID="9265ab6d-1d0a-42f4-baa7-12e5c42cad61" Nov 1 00:14:56.051809 containerd[1460]: time="2025-11-01T00:14:56.051710137Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:14:56.051880 containerd[1460]: time="2025-11-01T00:14:56.051811381Z" level=info msg="RemovePodSandbox \"f4d2e887169641f68f687da1835da6357f19b67b26114877566c6a26c6acff96\" returns successfully" Nov 1 00:14:56.052431 containerd[1460]: time="2025-11-01T00:14:56.052401683Z" level=info msg="StopPodSandbox for \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\"" Nov 1 00:14:56.104521 systemd[1]: Started sshd@11-10.0.0.19:22-10.0.0.1:34512.service - OpenSSH per-connection server daemon (10.0.0.1:34512). Nov 1 00:14:56.188755 sshd[5172]: Accepted publickey for core from 10.0.0.1 port 34512 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:14:56.191205 sshd[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:14:56.196489 systemd-logind[1438]: New session 12 of user core. Nov 1 00:14:56.203994 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 00:14:56.372141 containerd[1460]: 2025-11-01 00:14:56.091 [WARNING][5157] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ln244-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d5658fcc-61ca-4e96-9f79-25e33876cacb", ResourceVersion:"1254", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa", Pod:"csi-node-driver-ln244", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3c6f2809e9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:56.372141 containerd[1460]: 2025-11-01 00:14:56.092 [INFO][5157] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Nov 1 00:14:56.372141 containerd[1460]: 2025-11-01 00:14:56.092 [INFO][5157] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" iface="eth0" netns="" Nov 1 00:14:56.372141 containerd[1460]: 2025-11-01 00:14:56.092 [INFO][5157] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Nov 1 00:14:56.372141 containerd[1460]: 2025-11-01 00:14:56.092 [INFO][5157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Nov 1 00:14:56.372141 containerd[1460]: 2025-11-01 00:14:56.122 [INFO][5166] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" HandleID="k8s-pod-network.e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Workload="localhost-k8s-csi--node--driver--ln244-eth0" Nov 1 00:14:56.372141 containerd[1460]: 2025-11-01 00:14:56.122 [INFO][5166] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:56.372141 containerd[1460]: 2025-11-01 00:14:56.122 [INFO][5166] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:56.372141 containerd[1460]: 2025-11-01 00:14:56.362 [WARNING][5166] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" HandleID="k8s-pod-network.e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Workload="localhost-k8s-csi--node--driver--ln244-eth0" Nov 1 00:14:56.372141 containerd[1460]: 2025-11-01 00:14:56.362 [INFO][5166] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" HandleID="k8s-pod-network.e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Workload="localhost-k8s-csi--node--driver--ln244-eth0" Nov 1 00:14:56.372141 containerd[1460]: 2025-11-01 00:14:56.364 [INFO][5166] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:56.372141 containerd[1460]: 2025-11-01 00:14:56.367 [INFO][5157] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Nov 1 00:14:56.372141 containerd[1460]: time="2025-11-01T00:14:56.372120659Z" level=info msg="TearDown network for sandbox \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\" successfully" Nov 1 00:14:56.373711 containerd[1460]: time="2025-11-01T00:14:56.372159143Z" level=info msg="StopPodSandbox for \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\" returns successfully" Nov 1 00:14:56.373711 containerd[1460]: time="2025-11-01T00:14:56.373066162Z" level=info msg="RemovePodSandbox for \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\"" Nov 1 00:14:56.373711 containerd[1460]: time="2025-11-01T00:14:56.373133290Z" level=info msg="Forcibly stopping sandbox \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\"" Nov 1 00:14:56.457490 containerd[1460]: 2025-11-01 00:14:56.418 [WARNING][5196] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ln244-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d5658fcc-61ca-4e96-9f79-25e33876cacb", ResourceVersion:"1254", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3c2f4aed56a4a02863ffcf336dac7b7305f56b2dbee46a85e641a86eeb847fa", Pod:"csi-node-driver-ln244", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3c6f2809e9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:56.457490 containerd[1460]: 2025-11-01 00:14:56.418 [INFO][5196] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Nov 1 00:14:56.457490 containerd[1460]: 2025-11-01 00:14:56.418 [INFO][5196] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" iface="eth0" netns="" Nov 1 00:14:56.457490 containerd[1460]: 2025-11-01 00:14:56.418 [INFO][5196] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Nov 1 00:14:56.457490 containerd[1460]: 2025-11-01 00:14:56.418 [INFO][5196] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Nov 1 00:14:56.457490 containerd[1460]: 2025-11-01 00:14:56.441 [INFO][5205] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" HandleID="k8s-pod-network.e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Workload="localhost-k8s-csi--node--driver--ln244-eth0" Nov 1 00:14:56.457490 containerd[1460]: 2025-11-01 00:14:56.441 [INFO][5205] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:56.457490 containerd[1460]: 2025-11-01 00:14:56.441 [INFO][5205] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:56.457490 containerd[1460]: 2025-11-01 00:14:56.450 [WARNING][5205] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" HandleID="k8s-pod-network.e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Workload="localhost-k8s-csi--node--driver--ln244-eth0" Nov 1 00:14:56.457490 containerd[1460]: 2025-11-01 00:14:56.450 [INFO][5205] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" HandleID="k8s-pod-network.e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Workload="localhost-k8s-csi--node--driver--ln244-eth0" Nov 1 00:14:56.457490 containerd[1460]: 2025-11-01 00:14:56.451 [INFO][5205] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:56.457490 containerd[1460]: 2025-11-01 00:14:56.454 [INFO][5196] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e" Nov 1 00:14:56.458111 containerd[1460]: time="2025-11-01T00:14:56.457556998Z" level=info msg="TearDown network for sandbox \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\" successfully" Nov 1 00:14:56.549232 sshd[5172]: pam_unix(sshd:session): session closed for user core Nov 1 00:14:56.560275 systemd[1]: sshd@11-10.0.0.19:22-10.0.0.1:34512.service: Deactivated successfully. Nov 1 00:14:56.563126 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:14:56.565256 systemd-logind[1438]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:14:56.571892 containerd[1460]: time="2025-11-01T00:14:56.571682591Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:14:56.571892 containerd[1460]: time="2025-11-01T00:14:56.571805677Z" level=info msg="RemovePodSandbox \"e9c2857cb2885405aa0bcd432d5d28ed29a4d2c908280816ecebb505aa402c8e\" returns successfully" Nov 1 00:14:56.573824 containerd[1460]: time="2025-11-01T00:14:56.572480140Z" level=info msg="StopPodSandbox for \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\"" Nov 1 00:14:56.572711 systemd[1]: Started sshd@12-10.0.0.19:22-10.0.0.1:34526.service - OpenSSH per-connection server daemon (10.0.0.1:34526). Nov 1 00:14:56.574049 systemd-logind[1438]: Removed session 12. Nov 1 00:14:56.608219 sshd[5216]: Accepted publickey for core from 10.0.0.1 port 34526 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:14:56.611021 sshd[5216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:14:56.618090 systemd-logind[1438]: New session 13 of user core. Nov 1 00:14:56.630049 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 00:14:56.658407 containerd[1460]: 2025-11-01 00:14:56.615 [WARNING][5227] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" WorkloadEndpoint="localhost-k8s-whisker--5cdc56d7f5--vs2fv-eth0" Nov 1 00:14:56.658407 containerd[1460]: 2025-11-01 00:14:56.615 [INFO][5227] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Nov 1 00:14:56.658407 containerd[1460]: 2025-11-01 00:14:56.615 [INFO][5227] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" iface="eth0" netns="" Nov 1 00:14:56.658407 containerd[1460]: 2025-11-01 00:14:56.615 [INFO][5227] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Nov 1 00:14:56.658407 containerd[1460]: 2025-11-01 00:14:56.615 [INFO][5227] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Nov 1 00:14:56.658407 containerd[1460]: 2025-11-01 00:14:56.642 [INFO][5236] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" HandleID="k8s-pod-network.bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Workload="localhost-k8s-whisker--5cdc56d7f5--vs2fv-eth0" Nov 1 00:14:56.658407 containerd[1460]: 2025-11-01 00:14:56.643 [INFO][5236] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:56.658407 containerd[1460]: 2025-11-01 00:14:56.643 [INFO][5236] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:56.658407 containerd[1460]: 2025-11-01 00:14:56.650 [WARNING][5236] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" HandleID="k8s-pod-network.bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Workload="localhost-k8s-whisker--5cdc56d7f5--vs2fv-eth0" Nov 1 00:14:56.658407 containerd[1460]: 2025-11-01 00:14:56.650 [INFO][5236] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" HandleID="k8s-pod-network.bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Workload="localhost-k8s-whisker--5cdc56d7f5--vs2fv-eth0" Nov 1 00:14:56.658407 containerd[1460]: 2025-11-01 00:14:56.652 [INFO][5236] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:56.658407 containerd[1460]: 2025-11-01 00:14:56.655 [INFO][5227] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Nov 1 00:14:56.658407 containerd[1460]: time="2025-11-01T00:14:56.658242122Z" level=info msg="TearDown network for sandbox \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\" successfully" Nov 1 00:14:56.658407 containerd[1460]: time="2025-11-01T00:14:56.658273743Z" level=info msg="StopPodSandbox for \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\" returns successfully" Nov 1 00:14:56.659055 containerd[1460]: time="2025-11-01T00:14:56.658898631Z" level=info msg="RemovePodSandbox for \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\"" Nov 1 00:14:56.659055 containerd[1460]: time="2025-11-01T00:14:56.658932315Z" level=info msg="Forcibly stopping sandbox \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\"" Nov 1 00:14:56.750204 containerd[1460]: 2025-11-01 00:14:56.700 [WARNING][5255] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" WorkloadEndpoint="localhost-k8s-whisker--5cdc56d7f5--vs2fv-eth0" Nov 1 00:14:56.750204 containerd[1460]: 2025-11-01 00:14:56.700 [INFO][5255] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Nov 1 00:14:56.750204 containerd[1460]: 2025-11-01 00:14:56.700 [INFO][5255] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" iface="eth0" netns="" Nov 1 00:14:56.750204 containerd[1460]: 2025-11-01 00:14:56.700 [INFO][5255] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Nov 1 00:14:56.750204 containerd[1460]: 2025-11-01 00:14:56.700 [INFO][5255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Nov 1 00:14:56.750204 containerd[1460]: 2025-11-01 00:14:56.730 [INFO][5269] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" HandleID="k8s-pod-network.bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Workload="localhost-k8s-whisker--5cdc56d7f5--vs2fv-eth0" Nov 1 00:14:56.750204 containerd[1460]: 2025-11-01 00:14:56.731 [INFO][5269] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:56.750204 containerd[1460]: 2025-11-01 00:14:56.731 [INFO][5269] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:56.750204 containerd[1460]: 2025-11-01 00:14:56.739 [WARNING][5269] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" HandleID="k8s-pod-network.bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Workload="localhost-k8s-whisker--5cdc56d7f5--vs2fv-eth0" Nov 1 00:14:56.750204 containerd[1460]: 2025-11-01 00:14:56.739 [INFO][5269] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" HandleID="k8s-pod-network.bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Workload="localhost-k8s-whisker--5cdc56d7f5--vs2fv-eth0" Nov 1 00:14:56.750204 containerd[1460]: 2025-11-01 00:14:56.742 [INFO][5269] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:56.750204 containerd[1460]: 2025-11-01 00:14:56.746 [INFO][5255] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853" Nov 1 00:14:56.750866 containerd[1460]: time="2025-11-01T00:14:56.750219266Z" level=info msg="TearDown network for sandbox \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\" successfully" Nov 1 00:14:56.757107 containerd[1460]: time="2025-11-01T00:14:56.756991226Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:14:56.757217 containerd[1460]: time="2025-11-01T00:14:56.757140001Z" level=info msg="RemovePodSandbox \"bd995bc170348fdcab3e87494577eb8ea69d3441183699832501cb98b6631853\" returns successfully" Nov 1 00:14:56.758824 containerd[1460]: time="2025-11-01T00:14:56.758764664Z" level=info msg="StopPodSandbox for \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\"" Nov 1 00:14:56.888665 sshd[5216]: pam_unix(sshd:session): session closed for user core Nov 1 00:14:56.915113 systemd[1]: Started sshd@13-10.0.0.19:22-10.0.0.1:34528.service - OpenSSH per-connection server daemon (10.0.0.1:34528). Nov 1 00:14:56.915863 systemd[1]: sshd@12-10.0.0.19:22-10.0.0.1:34526.service: Deactivated successfully. Nov 1 00:14:56.923302 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:14:56.928374 systemd-logind[1438]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:14:56.933834 systemd-logind[1438]: Removed session 13. Nov 1 00:14:56.988752 sshd[5302]: Accepted publickey for core from 10.0.0.1 port 34528 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:14:56.991626 sshd[5302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:14:57.001005 containerd[1460]: 2025-11-01 00:14:56.880 [WARNING][5288] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--xs644-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"e75afe96-48a0-4769-9bc6-591261c95345", ResourceVersion:"1211", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8", Pod:"goldmane-7c778bb748-xs644", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali26aadd30243", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:57.001005 containerd[1460]: 2025-11-01 00:14:56.880 [INFO][5288] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Nov 1 00:14:57.001005 containerd[1460]: 2025-11-01 00:14:56.880 [INFO][5288] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" iface="eth0" netns="" Nov 1 00:14:57.001005 containerd[1460]: 2025-11-01 00:14:56.882 [INFO][5288] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Nov 1 00:14:57.001005 containerd[1460]: 2025-11-01 00:14:56.882 [INFO][5288] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Nov 1 00:14:57.001005 containerd[1460]: 2025-11-01 00:14:56.978 [INFO][5296] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" HandleID="k8s-pod-network.b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Workload="localhost-k8s-goldmane--7c778bb748--xs644-eth0" Nov 1 00:14:57.001005 containerd[1460]: 2025-11-01 00:14:56.980 [INFO][5296] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:57.001005 containerd[1460]: 2025-11-01 00:14:56.980 [INFO][5296] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:57.001005 containerd[1460]: 2025-11-01 00:14:56.992 [WARNING][5296] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" HandleID="k8s-pod-network.b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Workload="localhost-k8s-goldmane--7c778bb748--xs644-eth0" Nov 1 00:14:57.001005 containerd[1460]: 2025-11-01 00:14:56.992 [INFO][5296] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" HandleID="k8s-pod-network.b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Workload="localhost-k8s-goldmane--7c778bb748--xs644-eth0" Nov 1 00:14:57.001005 containerd[1460]: 2025-11-01 00:14:56.993 [INFO][5296] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:57.001005 containerd[1460]: 2025-11-01 00:14:56.996 [INFO][5288] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Nov 1 00:14:57.001005 containerd[1460]: time="2025-11-01T00:14:57.000816784Z" level=info msg="TearDown network for sandbox \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\" successfully" Nov 1 00:14:57.001005 containerd[1460]: time="2025-11-01T00:14:57.000846701Z" level=info msg="StopPodSandbox for \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\" returns successfully" Nov 1 00:14:57.000869 systemd-logind[1438]: New session 14 of user core. Nov 1 00:14:57.001610 containerd[1460]: time="2025-11-01T00:14:57.001573964Z" level=info msg="RemovePodSandbox for \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\"" Nov 1 00:14:57.001810 containerd[1460]: time="2025-11-01T00:14:57.001642465Z" level=info msg="Forcibly stopping sandbox \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\"" Nov 1 00:14:57.006077 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 1 00:14:57.128146 containerd[1460]: 2025-11-01 00:14:57.067 [WARNING][5319] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--xs644-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"e75afe96-48a0-4769-9bc6-591261c95345", ResourceVersion:"1211", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dab05fdb64a7b048f47961370dc50c0482e37f462a76f1513ac69cc49d7e19e8", Pod:"goldmane-7c778bb748-xs644", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali26aadd30243", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:57.128146 containerd[1460]: 2025-11-01 00:14:57.068 [INFO][5319] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Nov 1 00:14:57.128146 containerd[1460]: 2025-11-01 00:14:57.068 [INFO][5319] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" iface="eth0" netns="" Nov 1 00:14:57.128146 containerd[1460]: 2025-11-01 00:14:57.068 [INFO][5319] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Nov 1 00:14:57.128146 containerd[1460]: 2025-11-01 00:14:57.068 [INFO][5319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Nov 1 00:14:57.128146 containerd[1460]: 2025-11-01 00:14:57.111 [INFO][5335] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" HandleID="k8s-pod-network.b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Workload="localhost-k8s-goldmane--7c778bb748--xs644-eth0" Nov 1 00:14:57.128146 containerd[1460]: 2025-11-01 00:14:57.111 [INFO][5335] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:57.128146 containerd[1460]: 2025-11-01 00:14:57.111 [INFO][5335] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:57.128146 containerd[1460]: 2025-11-01 00:14:57.118 [WARNING][5335] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" HandleID="k8s-pod-network.b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Workload="localhost-k8s-goldmane--7c778bb748--xs644-eth0" Nov 1 00:14:57.128146 containerd[1460]: 2025-11-01 00:14:57.118 [INFO][5335] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" HandleID="k8s-pod-network.b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Workload="localhost-k8s-goldmane--7c778bb748--xs644-eth0" Nov 1 00:14:57.128146 containerd[1460]: 2025-11-01 00:14:57.121 [INFO][5335] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:57.128146 containerd[1460]: 2025-11-01 00:14:57.124 [INFO][5319] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d" Nov 1 00:14:57.129088 containerd[1460]: time="2025-11-01T00:14:57.128229422Z" level=info msg="TearDown network for sandbox \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\" successfully" Nov 1 00:14:57.134041 containerd[1460]: time="2025-11-01T00:14:57.133894444Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:14:57.134041 containerd[1460]: time="2025-11-01T00:14:57.133994185Z" level=info msg="RemovePodSandbox \"b252001651e475a7d4aca0a09d2dd3f2c165010717bfced6d8a1296f31a9538d\" returns successfully" Nov 1 00:14:57.134680 containerd[1460]: time="2025-11-01T00:14:57.134638719Z" level=info msg="StopPodSandbox for \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\"" Nov 1 00:14:57.173008 sshd[5302]: pam_unix(sshd:session): session closed for user core Nov 1 00:14:57.179401 systemd[1]: sshd@13-10.0.0.19:22-10.0.0.1:34528.service: Deactivated successfully. Nov 1 00:14:57.184364 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:14:57.186732 systemd-logind[1438]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:14:57.188427 systemd-logind[1438]: Removed session 14. Nov 1 00:14:57.234855 containerd[1460]: 2025-11-01 00:14:57.188 [WARNING][5354] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0", GenerateName:"calico-kube-controllers-558d5b9ff5-", Namespace:"calico-system", SelfLink:"", UID:"9265ab6d-1d0a-42f4-baa7-12e5c42cad61", ResourceVersion:"1263", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"558d5b9ff5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23", Pod:"calico-kube-controllers-558d5b9ff5-tdbvc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali91f3269ae7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:57.234855 containerd[1460]: 2025-11-01 00:14:57.189 [INFO][5354] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Nov 1 00:14:57.234855 containerd[1460]: 2025-11-01 00:14:57.189 [INFO][5354] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" iface="eth0" netns="" Nov 1 00:14:57.234855 containerd[1460]: 2025-11-01 00:14:57.189 [INFO][5354] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Nov 1 00:14:57.234855 containerd[1460]: 2025-11-01 00:14:57.189 [INFO][5354] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Nov 1 00:14:57.234855 containerd[1460]: 2025-11-01 00:14:57.216 [INFO][5365] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" HandleID="k8s-pod-network.e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Workload="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" Nov 1 00:14:57.234855 containerd[1460]: 2025-11-01 00:14:57.216 [INFO][5365] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:57.234855 containerd[1460]: 2025-11-01 00:14:57.216 [INFO][5365] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:57.234855 containerd[1460]: 2025-11-01 00:14:57.224 [WARNING][5365] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" HandleID="k8s-pod-network.e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Workload="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" Nov 1 00:14:57.234855 containerd[1460]: 2025-11-01 00:14:57.224 [INFO][5365] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" HandleID="k8s-pod-network.e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Workload="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" Nov 1 00:14:57.234855 containerd[1460]: 2025-11-01 00:14:57.226 [INFO][5365] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:57.234855 containerd[1460]: 2025-11-01 00:14:57.231 [INFO][5354] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Nov 1 00:14:57.235455 containerd[1460]: time="2025-11-01T00:14:57.234919337Z" level=info msg="TearDown network for sandbox \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\" successfully" Nov 1 00:14:57.235455 containerd[1460]: time="2025-11-01T00:14:57.234958192Z" level=info msg="StopPodSandbox for \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\" returns successfully" Nov 1 00:14:57.235864 containerd[1460]: time="2025-11-01T00:14:57.235819010Z" level=info msg="RemovePodSandbox for \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\"" Nov 1 00:14:57.235918 containerd[1460]: time="2025-11-01T00:14:57.235874767Z" level=info msg="Forcibly stopping sandbox \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\"" Nov 1 00:14:57.320480 containerd[1460]: 2025-11-01 00:14:57.279 [WARNING][5383] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0", GenerateName:"calico-kube-controllers-558d5b9ff5-", Namespace:"calico-system", SelfLink:"", UID:"9265ab6d-1d0a-42f4-baa7-12e5c42cad61", ResourceVersion:"1263", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"558d5b9ff5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"203c9db63bdb96e1305f1948f868856865a4cc1b91c5397467ad08a127ffcc23", Pod:"calico-kube-controllers-558d5b9ff5-tdbvc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali91f3269ae7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:57.320480 containerd[1460]: 2025-11-01 00:14:57.279 [INFO][5383] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Nov 1 00:14:57.320480 containerd[1460]: 2025-11-01 00:14:57.279 [INFO][5383] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" iface="eth0" netns="" Nov 1 00:14:57.320480 containerd[1460]: 2025-11-01 00:14:57.279 [INFO][5383] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Nov 1 00:14:57.320480 containerd[1460]: 2025-11-01 00:14:57.279 [INFO][5383] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Nov 1 00:14:57.320480 containerd[1460]: 2025-11-01 00:14:57.303 [INFO][5392] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" HandleID="k8s-pod-network.e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Workload="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" Nov 1 00:14:57.320480 containerd[1460]: 2025-11-01 00:14:57.303 [INFO][5392] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:57.320480 containerd[1460]: 2025-11-01 00:14:57.303 [INFO][5392] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:57.320480 containerd[1460]: 2025-11-01 00:14:57.311 [WARNING][5392] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" HandleID="k8s-pod-network.e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Workload="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" Nov 1 00:14:57.320480 containerd[1460]: 2025-11-01 00:14:57.311 [INFO][5392] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" HandleID="k8s-pod-network.e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Workload="localhost-k8s-calico--kube--controllers--558d5b9ff5--tdbvc-eth0" Nov 1 00:14:57.320480 containerd[1460]: 2025-11-01 00:14:57.313 [INFO][5392] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:57.320480 containerd[1460]: 2025-11-01 00:14:57.317 [INFO][5383] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8" Nov 1 00:14:57.321272 containerd[1460]: time="2025-11-01T00:14:57.321223012Z" level=info msg="TearDown network for sandbox \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\" successfully" Nov 1 00:14:57.326210 containerd[1460]: time="2025-11-01T00:14:57.326172813Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:14:57.326291 containerd[1460]: time="2025-11-01T00:14:57.326246314Z" level=info msg="RemovePodSandbox \"e05484c5f8e715b813d610b772e40d4c448da2a07ae46b6141ffc8f9c30c62b8\" returns successfully" Nov 1 00:14:57.326888 containerd[1460]: time="2025-11-01T00:14:57.326858336Z" level=info msg="StopPodSandbox for \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\"" Nov 1 00:14:57.410885 containerd[1460]: 2025-11-01 00:14:57.363 [WARNING][5409] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0", GenerateName:"calico-apiserver-b56b4988b-", Namespace:"calico-apiserver", SelfLink:"", UID:"42d6452e-a1e5-4daf-80fd-e1f205f5b03a", ResourceVersion:"1194", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b56b4988b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26", Pod:"calico-apiserver-b56b4988b-mnxsg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali261ead9739e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:57.410885 containerd[1460]: 2025-11-01 00:14:57.364 [INFO][5409] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Nov 1 00:14:57.410885 containerd[1460]: 2025-11-01 00:14:57.364 [INFO][5409] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" iface="eth0" netns="" Nov 1 00:14:57.410885 containerd[1460]: 2025-11-01 00:14:57.364 [INFO][5409] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Nov 1 00:14:57.410885 containerd[1460]: 2025-11-01 00:14:57.364 [INFO][5409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Nov 1 00:14:57.410885 containerd[1460]: 2025-11-01 00:14:57.393 [INFO][5418] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" HandleID="k8s-pod-network.627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Workload="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" Nov 1 00:14:57.410885 containerd[1460]: 2025-11-01 00:14:57.394 [INFO][5418] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:57.410885 containerd[1460]: 2025-11-01 00:14:57.394 [INFO][5418] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:57.410885 containerd[1460]: 2025-11-01 00:14:57.401 [WARNING][5418] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" HandleID="k8s-pod-network.627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Workload="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" Nov 1 00:14:57.410885 containerd[1460]: 2025-11-01 00:14:57.402 [INFO][5418] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" HandleID="k8s-pod-network.627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Workload="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" Nov 1 00:14:57.410885 containerd[1460]: 2025-11-01 00:14:57.405 [INFO][5418] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:57.410885 containerd[1460]: 2025-11-01 00:14:57.408 [INFO][5409] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Nov 1 00:14:57.411482 containerd[1460]: time="2025-11-01T00:14:57.410963880Z" level=info msg="TearDown network for sandbox \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\" successfully" Nov 1 00:14:57.411482 containerd[1460]: time="2025-11-01T00:14:57.411000059Z" level=info msg="StopPodSandbox for \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\" returns successfully" Nov 1 00:14:57.411783 containerd[1460]: time="2025-11-01T00:14:57.411675523Z" level=info msg="RemovePodSandbox for \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\"" Nov 1 00:14:57.411783 containerd[1460]: time="2025-11-01T00:14:57.411759454Z" level=info msg="Forcibly stopping sandbox \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\"" Nov 1 00:14:57.485416 containerd[1460]: 2025-11-01 00:14:57.448 [WARNING][5435] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0", GenerateName:"calico-apiserver-b56b4988b-", Namespace:"calico-apiserver", SelfLink:"", UID:"42d6452e-a1e5-4daf-80fd-e1f205f5b03a", ResourceVersion:"1194", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b56b4988b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f6639d8dbb06fb48840e2caf468d552e6b0dd2557a0e4416c35edc762330d26", Pod:"calico-apiserver-b56b4988b-mnxsg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali261ead9739e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:57.485416 containerd[1460]: 2025-11-01 00:14:57.449 [INFO][5435] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Nov 1 00:14:57.485416 containerd[1460]: 2025-11-01 00:14:57.449 [INFO][5435] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" iface="eth0" netns="" Nov 1 00:14:57.485416 containerd[1460]: 2025-11-01 00:14:57.449 [INFO][5435] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Nov 1 00:14:57.485416 containerd[1460]: 2025-11-01 00:14:57.449 [INFO][5435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Nov 1 00:14:57.485416 containerd[1460]: 2025-11-01 00:14:57.471 [INFO][5444] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" HandleID="k8s-pod-network.627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Workload="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" Nov 1 00:14:57.485416 containerd[1460]: 2025-11-01 00:14:57.471 [INFO][5444] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:57.485416 containerd[1460]: 2025-11-01 00:14:57.471 [INFO][5444] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:57.485416 containerd[1460]: 2025-11-01 00:14:57.477 [WARNING][5444] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" HandleID="k8s-pod-network.627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Workload="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" Nov 1 00:14:57.485416 containerd[1460]: 2025-11-01 00:14:57.477 [INFO][5444] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" HandleID="k8s-pod-network.627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Workload="localhost-k8s-calico--apiserver--b56b4988b--mnxsg-eth0" Nov 1 00:14:57.485416 containerd[1460]: 2025-11-01 00:14:57.479 [INFO][5444] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:57.485416 containerd[1460]: 2025-11-01 00:14:57.482 [INFO][5435] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0" Nov 1 00:14:57.485416 containerd[1460]: time="2025-11-01T00:14:57.485389923Z" level=info msg="TearDown network for sandbox \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\" successfully" Nov 1 00:14:57.490598 containerd[1460]: time="2025-11-01T00:14:57.490545489Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:14:57.490656 containerd[1460]: time="2025-11-01T00:14:57.490605724Z" level=info msg="RemovePodSandbox \"627540608567e0883762d8c928fdcb6300502023b5cc8bc23fb3513c8eb8dcb0\" returns successfully" Nov 1 00:14:57.491156 containerd[1460]: time="2025-11-01T00:14:57.491127894Z" level=info msg="StopPodSandbox for \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\"" Nov 1 00:14:57.567624 containerd[1460]: 2025-11-01 00:14:57.527 [WARNING][5461] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0", GenerateName:"calico-apiserver-b56b4988b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f11d7d31-f676-4516-b063-ddcb43a2faf5", ResourceVersion:"1245", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b56b4988b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba", Pod:"calico-apiserver-b56b4988b-k8vh2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9e6f1000d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:57.567624 containerd[1460]: 2025-11-01 00:14:57.527 [INFO][5461] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Nov 1 00:14:57.567624 containerd[1460]: 2025-11-01 00:14:57.527 [INFO][5461] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" iface="eth0" netns="" Nov 1 00:14:57.567624 containerd[1460]: 2025-11-01 00:14:57.527 [INFO][5461] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Nov 1 00:14:57.567624 containerd[1460]: 2025-11-01 00:14:57.527 [INFO][5461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Nov 1 00:14:57.567624 containerd[1460]: 2025-11-01 00:14:57.551 [INFO][5470] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" HandleID="k8s-pod-network.33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Workload="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" Nov 1 00:14:57.567624 containerd[1460]: 2025-11-01 00:14:57.551 [INFO][5470] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:57.567624 containerd[1460]: 2025-11-01 00:14:57.552 [INFO][5470] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:57.567624 containerd[1460]: 2025-11-01 00:14:57.558 [WARNING][5470] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" HandleID="k8s-pod-network.33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Workload="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" Nov 1 00:14:57.567624 containerd[1460]: 2025-11-01 00:14:57.558 [INFO][5470] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" HandleID="k8s-pod-network.33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Workload="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" Nov 1 00:14:57.567624 containerd[1460]: 2025-11-01 00:14:57.560 [INFO][5470] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:57.567624 containerd[1460]: 2025-11-01 00:14:57.564 [INFO][5461] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Nov 1 00:14:57.568226 containerd[1460]: time="2025-11-01T00:14:57.567719815Z" level=info msg="TearDown network for sandbox \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\" successfully" Nov 1 00:14:57.568226 containerd[1460]: time="2025-11-01T00:14:57.567764191Z" level=info msg="StopPodSandbox for \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\" returns successfully" Nov 1 00:14:57.568616 containerd[1460]: time="2025-11-01T00:14:57.568571387Z" level=info msg="RemovePodSandbox for \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\"" Nov 1 00:14:57.568616 containerd[1460]: time="2025-11-01T00:14:57.568612085Z" level=info msg="Forcibly stopping sandbox \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\"" Nov 1 00:14:57.645772 containerd[1460]: 2025-11-01 00:14:57.608 [WARNING][5488] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0", GenerateName:"calico-apiserver-b56b4988b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f11d7d31-f676-4516-b063-ddcb43a2faf5", ResourceVersion:"1245", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b56b4988b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d906a5232d59a08d82f5fb06455edd780df1f8c1e558e7b48c444ed11dc789ba", Pod:"calico-apiserver-b56b4988b-k8vh2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9e6f1000d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:57.645772 containerd[1460]: 2025-11-01 00:14:57.609 [INFO][5488] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Nov 1 00:14:57.645772 containerd[1460]: 2025-11-01 00:14:57.609 [INFO][5488] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" iface="eth0" netns="" Nov 1 00:14:57.645772 containerd[1460]: 2025-11-01 00:14:57.609 [INFO][5488] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Nov 1 00:14:57.645772 containerd[1460]: 2025-11-01 00:14:57.609 [INFO][5488] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Nov 1 00:14:57.645772 containerd[1460]: 2025-11-01 00:14:57.630 [INFO][5497] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" HandleID="k8s-pod-network.33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Workload="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" Nov 1 00:14:57.645772 containerd[1460]: 2025-11-01 00:14:57.630 [INFO][5497] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:57.645772 containerd[1460]: 2025-11-01 00:14:57.630 [INFO][5497] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:57.645772 containerd[1460]: 2025-11-01 00:14:57.638 [WARNING][5497] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" HandleID="k8s-pod-network.33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Workload="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" Nov 1 00:14:57.645772 containerd[1460]: 2025-11-01 00:14:57.638 [INFO][5497] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" HandleID="k8s-pod-network.33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Workload="localhost-k8s-calico--apiserver--b56b4988b--k8vh2-eth0" Nov 1 00:14:57.645772 containerd[1460]: 2025-11-01 00:14:57.640 [INFO][5497] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:57.645772 containerd[1460]: 2025-11-01 00:14:57.642 [INFO][5488] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a" Nov 1 00:14:57.646216 containerd[1460]: time="2025-11-01T00:14:57.645819896Z" level=info msg="TearDown network for sandbox \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\" successfully" Nov 1 00:14:57.650806 containerd[1460]: time="2025-11-01T00:14:57.650764167Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:14:57.650891 containerd[1460]: time="2025-11-01T00:14:57.650814083Z" level=info msg="RemovePodSandbox \"33af7f8d4bd14cf12babecbfb6ad49b72310c23daaa595805487e2dee988b16a\" returns successfully" Nov 1 00:14:57.651410 containerd[1460]: time="2025-11-01T00:14:57.651371341Z" level=info msg="StopPodSandbox for \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\"" Nov 1 00:14:57.734493 containerd[1460]: 2025-11-01 00:14:57.688 [WARNING][5515] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--vw47h-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6846208b-d846-430d-8df4-ccfb42c456d3", ResourceVersion:"1167", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274", Pod:"coredns-66bc5c9577-vw47h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21ae2a0746b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:57.734493 containerd[1460]: 2025-11-01 00:14:57.689 [INFO][5515] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Nov 1 00:14:57.734493 containerd[1460]: 2025-11-01 00:14:57.689 [INFO][5515] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" iface="eth0" netns="" Nov 1 00:14:57.734493 containerd[1460]: 2025-11-01 00:14:57.689 [INFO][5515] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Nov 1 00:14:57.734493 containerd[1460]: 2025-11-01 00:14:57.689 [INFO][5515] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Nov 1 00:14:57.734493 containerd[1460]: 2025-11-01 00:14:57.715 [INFO][5524] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" HandleID="k8s-pod-network.ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Workload="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" Nov 1 00:14:57.734493 containerd[1460]: 2025-11-01 00:14:57.715 [INFO][5524] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:57.734493 containerd[1460]: 2025-11-01 00:14:57.715 [INFO][5524] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:57.734493 containerd[1460]: 2025-11-01 00:14:57.725 [WARNING][5524] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" HandleID="k8s-pod-network.ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Workload="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" Nov 1 00:14:57.734493 containerd[1460]: 2025-11-01 00:14:57.725 [INFO][5524] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" HandleID="k8s-pod-network.ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Workload="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" Nov 1 00:14:57.734493 containerd[1460]: 2025-11-01 00:14:57.726 [INFO][5524] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:57.734493 containerd[1460]: 2025-11-01 00:14:57.730 [INFO][5515] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Nov 1 00:14:57.735045 containerd[1460]: time="2025-11-01T00:14:57.734583874Z" level=info msg="TearDown network for sandbox \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\" successfully" Nov 1 00:14:57.735045 containerd[1460]: time="2025-11-01T00:14:57.734622117Z" level=info msg="StopPodSandbox for \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\" returns successfully" Nov 1 00:14:57.735362 containerd[1460]: time="2025-11-01T00:14:57.735314383Z" level=info msg="RemovePodSandbox for \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\"" Nov 1 00:14:57.735362 containerd[1460]: time="2025-11-01T00:14:57.735356002Z" level=info msg="Forcibly stopping sandbox \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\"" Nov 1 00:14:57.813341 containerd[1460]: 2025-11-01 00:14:57.773 [WARNING][5541] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--vw47h-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6846208b-d846-430d-8df4-ccfb42c456d3", ResourceVersion:"1167", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"edb78876aa2368893ec9e3b8e99130ccfa242f5e04c0bf8456af45c700818274", Pod:"coredns-66bc5c9577-vw47h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21ae2a0746b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:14:57.813341 containerd[1460]: 2025-11-01 00:14:57.773 [INFO][5541] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Nov 1 00:14:57.813341 containerd[1460]: 2025-11-01 00:14:57.774 [INFO][5541] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" iface="eth0" netns="" Nov 1 00:14:57.813341 containerd[1460]: 2025-11-01 00:14:57.774 [INFO][5541] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Nov 1 00:14:57.813341 containerd[1460]: 2025-11-01 00:14:57.774 [INFO][5541] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Nov 1 00:14:57.813341 containerd[1460]: 2025-11-01 00:14:57.797 [INFO][5549] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" HandleID="k8s-pod-network.ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Workload="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" Nov 1 00:14:57.813341 containerd[1460]: 2025-11-01 00:14:57.797 [INFO][5549] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:14:57.813341 containerd[1460]: 2025-11-01 00:14:57.798 [INFO][5549] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:14:57.813341 containerd[1460]: 2025-11-01 00:14:57.804 [WARNING][5549] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" HandleID="k8s-pod-network.ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Workload="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" Nov 1 00:14:57.813341 containerd[1460]: 2025-11-01 00:14:57.804 [INFO][5549] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" HandleID="k8s-pod-network.ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Workload="localhost-k8s-coredns--66bc5c9577--vw47h-eth0" Nov 1 00:14:57.813341 containerd[1460]: 2025-11-01 00:14:57.807 [INFO][5549] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:14:57.813341 containerd[1460]: 2025-11-01 00:14:57.810 [INFO][5541] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8" Nov 1 00:14:57.813341 containerd[1460]: time="2025-11-01T00:14:57.813320925Z" level=info msg="TearDown network for sandbox \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\" successfully" Nov 1 00:14:57.817949 containerd[1460]: time="2025-11-01T00:14:57.817877272Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:14:57.817949 containerd[1460]: time="2025-11-01T00:14:57.817954079Z" level=info msg="RemovePodSandbox \"ce472c228a337d23515468884f2402c5fb75ebcf05576479476566dbd6a79df8\" returns successfully" Nov 1 00:14:58.697716 containerd[1460]: time="2025-11-01T00:14:58.697645477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:14:59.174062 containerd[1460]: time="2025-11-01T00:14:59.173994320Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:14:59.208980 containerd[1460]: time="2025-11-01T00:14:59.208814725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:14:59.209187 containerd[1460]: time="2025-11-01T00:14:59.208817410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:14:59.209297 kubelet[2506]: E1101 00:14:59.209192 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:14:59.209297 kubelet[2506]: E1101 00:14:59.209260 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:14:59.210069 kubelet[2506]: E1101 00:14:59.209404 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-f77d9cc7f-vdvzf_calico-system(a74773bf-2487-45d4-8b3d-33b1f685360f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:14:59.210726 containerd[1460]: time="2025-11-01T00:14:59.210420307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:14:59.553254 containerd[1460]: time="2025-11-01T00:14:59.553061651Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:14:59.603360 containerd[1460]: time="2025-11-01T00:14:59.603246944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:14:59.603527 containerd[1460]: time="2025-11-01T00:14:59.603298964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:14:59.603790 kubelet[2506]: E1101 00:14:59.603716 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:14:59.603790 kubelet[2506]: E1101 00:14:59.603785 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:14:59.604367 kubelet[2506]: E1101 00:14:59.603893 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-f77d9cc7f-vdvzf_calico-system(a74773bf-2487-45d4-8b3d-33b1f685360f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:14:59.604367 kubelet[2506]: E1101 00:14:59.603955 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f77d9cc7f-vdvzf" podUID="a74773bf-2487-45d4-8b3d-33b1f685360f" Nov 1 00:14:59.697105 containerd[1460]: time="2025-11-01T00:14:59.697052120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:15:00.070662 containerd[1460]: time="2025-11-01T00:15:00.070506269Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:15:00.099957 containerd[1460]: time="2025-11-01T00:15:00.099865405Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:15:00.100138 containerd[1460]: time="2025-11-01T00:15:00.099900261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:15:00.100276 kubelet[2506]: E1101 00:15:00.100217 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:15:00.100359 kubelet[2506]: E1101 00:15:00.100286 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:15:00.100429 kubelet[2506]: E1101 00:15:00.100401 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-b56b4988b-mnxsg_calico-apiserver(42d6452e-a1e5-4daf-80fd-e1f205f5b03a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:15:00.100529 kubelet[2506]: E1101 00:15:00.100449 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b56b4988b-mnxsg" podUID="42d6452e-a1e5-4daf-80fd-e1f205f5b03a" Nov 1 00:15:01.697571 containerd[1460]: time="2025-11-01T00:15:01.697486169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:15:02.046098 containerd[1460]: time="2025-11-01T00:15:02.045881487Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:15:02.049082 containerd[1460]: time="2025-11-01T00:15:02.049009403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:15:02.049162 containerd[1460]: time="2025-11-01T00:15:02.049056833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:15:02.049427 kubelet[2506]: E1101 00:15:02.049353 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:15:02.049427 kubelet[2506]: E1101 00:15:02.049431 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:15:02.050074 kubelet[2506]: E1101 00:15:02.049550 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xs644_calico-system(e75afe96-48a0-4769-9bc6-591261c95345): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:15:02.050074 kubelet[2506]: 
E1101 00:15:02.049598 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xs644" podUID="e75afe96-48a0-4769-9bc6-591261c95345" Nov 1 00:15:02.188605 systemd[1]: Started sshd@14-10.0.0.19:22-10.0.0.1:34536.service - OpenSSH per-connection server daemon (10.0.0.1:34536). Nov 1 00:15:02.226210 sshd[5563]: Accepted publickey for core from 10.0.0.1 port 34536 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:15:02.228404 sshd[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:15:02.233068 systemd-logind[1438]: New session 15 of user core. Nov 1 00:15:02.242922 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 00:15:02.365240 sshd[5563]: pam_unix(sshd:session): session closed for user core Nov 1 00:15:02.371092 systemd[1]: sshd@14-10.0.0.19:22-10.0.0.1:34536.service: Deactivated successfully. Nov 1 00:15:02.374493 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:15:02.375531 systemd-logind[1438]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:15:02.377413 systemd-logind[1438]: Removed session 15. Nov 1 00:15:06.697133 kubelet[2506]: E1101 00:15:06.697036 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b56b4988b-k8vh2" podUID="f11d7d31-f676-4516-b063-ddcb43a2faf5" Nov 1 00:15:07.381071 systemd[1]: Started sshd@15-10.0.0.19:22-10.0.0.1:52986.service - OpenSSH per-connection server daemon (10.0.0.1:52986). Nov 1 00:15:07.424388 sshd[5587]: Accepted publickey for core from 10.0.0.1 port 52986 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:15:07.426737 sshd[5587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:15:07.432062 systemd-logind[1438]: New session 16 of user core. Nov 1 00:15:07.438880 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 1 00:15:07.576587 sshd[5587]: pam_unix(sshd:session): session closed for user core Nov 1 00:15:07.581250 systemd[1]: sshd@15-10.0.0.19:22-10.0.0.1:52986.service: Deactivated successfully. Nov 1 00:15:07.584169 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:15:07.585041 systemd-logind[1438]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:15:07.586152 systemd-logind[1438]: Removed session 16. 
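Every PullImage failure in this stretch has the same shape: containerd resolves the reference against ghcr.io, gets http.StatusNotFound, and surfaces a NotFound gRPC error that kubelet records as ErrImagePull. One can reproduce a single failing pull outside kubelet with the containerd Go client; the socket path and the "k8s.io" namespace (where kubelet-pulled images live) are the usual defaults, assumed here:

    // Sketch: re-run one of the failing pulls directly against containerd.
    package main

    import (
        "context"
        "fmt"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // kubelet's images live in the "k8s.io" containerd namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        _, err = client.Pull(ctx, "ghcr.io/flatcar/calico/goldmane:v3.30.4",
            containerd.WithPullUnpack)
        fmt.Println(err) // expected: "... goldmane:v3.30.4: not found", as above
    }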
Nov 1 00:15:07.703162 kubelet[2506]: E1101 00:15:07.703059 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ln244" podUID="d5658fcc-61ca-4e96-9f79-25e33876cacb" Nov 1 00:15:10.705361 kubelet[2506]: E1101 00:15:10.705308 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558d5b9ff5-tdbvc" podUID="9265ab6d-1d0a-42f4-baa7-12e5c42cad61" Nov 1 00:15:11.697854 kubelet[2506]: E1101 00:15:11.697751 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b56b4988b-mnxsg" podUID="42d6452e-a1e5-4daf-80fd-e1f205f5b03a" Nov 1 00:15:12.458326 kubelet[2506]: E1101 00:15:12.458281 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:15:12.613952 systemd[1]: Started sshd@16-10.0.0.19:22-10.0.0.1:52988.service - OpenSSH per-connection server daemon (10.0.0.1:52988). 
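The recurring "Nameserver limits exceeded" records come from kubelet's resolv.conf validation: glibc resolvers use at most three nameserver entries, so when the node lists more, kubelet warns and applies only the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8). A quick check of what the node actually has configured, assuming the standard /etc/resolv.conf path:

    // Sketch: count nameserver entries the way the glibc limit (three) is
    // applied; more than three triggers kubelet's warning seen above.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        fmt.Println("nameservers:", servers)
        if len(servers) > 3 {
            fmt.Println("only the first 3 are used; kubelet logs the omission")
        }
    }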
Nov 1 00:15:12.686967 sshd[5629]: Accepted publickey for core from 10.0.0.1 port 52988 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:15:12.688631 sshd[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:15:12.698645 kubelet[2506]: E1101 00:15:12.698515 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f77d9cc7f-vdvzf" podUID="a74773bf-2487-45d4-8b3d-33b1f685360f" Nov 1 00:15:12.699968 systemd-logind[1438]: New session 17 of user core. Nov 1 00:15:12.712436 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 00:15:12.922491 sshd[5629]: pam_unix(sshd:session): session closed for user core Nov 1 00:15:12.926965 systemd[1]: sshd@16-10.0.0.19:22-10.0.0.1:52988.service: Deactivated successfully. Nov 1 00:15:12.930007 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:15:12.930806 systemd-logind[1438]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:15:12.932067 systemd-logind[1438]: Removed session 17. Nov 1 00:15:13.699374 kubelet[2506]: E1101 00:15:13.699017 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xs644" podUID="e75afe96-48a0-4769-9bc6-591261c95345" Nov 1 00:15:15.696926 kubelet[2506]: E1101 00:15:15.696868 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:15:18.028176 systemd[1]: Started sshd@17-10.0.0.19:22-10.0.0.1:39838.service - OpenSSH per-connection server daemon (10.0.0.1:39838). Nov 1 00:15:18.199751 sshd[5646]: Accepted publickey for core from 10.0.0.1 port 39838 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:15:18.203204 sshd[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:15:18.212564 systemd-logind[1438]: New session 18 of user core. Nov 1 00:15:18.233166 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 1 00:15:18.556608 sshd[5646]: pam_unix(sshd:session): session closed for user core Nov 1 00:15:18.593121 systemd[1]: sshd@17-10.0.0.19:22-10.0.0.1:39838.service: Deactivated successfully. 
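Note the progression in the kubelet records: a fresh failure for a container is reported as ErrImagePull, and subsequent pod syncs report ImagePullBackOff while kubelet waits out a doubling delay before the next pull attempt, which is why the same pods resurface every few minutes here. A sketch of that escalation; the 10-second initial delay and 5-minute cap mirror kubelet's defaults and are stated as an assumption, not read from this log:

    // Sketch: doubling back-off between image pull attempts (assumed defaults).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("attempt %d fails -> next pull in %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }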
Nov 1 00:15:18.603056 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:15:18.606083 systemd-logind[1438]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:15:18.607619 systemd-logind[1438]: Removed session 18. Nov 1 00:15:19.707107 containerd[1460]: time="2025-11-01T00:15:19.705580313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:15:20.071649 containerd[1460]: time="2025-11-01T00:15:20.071212738Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:15:20.091364 containerd[1460]: time="2025-11-01T00:15:20.090984940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:15:20.091576 containerd[1460]: time="2025-11-01T00:15:20.091528581Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:15:20.093263 kubelet[2506]: E1101 00:15:20.091749 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:15:20.093263 kubelet[2506]: E1101 00:15:20.091809 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:15:20.093263 kubelet[2506]: E1101 00:15:20.091919 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ln244_calico-system(d5658fcc-61ca-4e96-9f79-25e33876cacb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:15:20.093901 containerd[1460]: time="2025-11-01T00:15:20.092951942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:15:20.435599 containerd[1460]: time="2025-11-01T00:15:20.435501375Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:15:20.447449 containerd[1460]: time="2025-11-01T00:15:20.447268055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:15:20.447449 containerd[1460]: time="2025-11-01T00:15:20.447422398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:15:20.447805 kubelet[2506]: E1101 00:15:20.447744 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:15:20.447879 kubelet[2506]: E1101 00:15:20.447817 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:15:20.448032 kubelet[2506]: E1101 00:15:20.447932 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ln244_calico-system(d5658fcc-61ca-4e96-9f79-25e33876cacb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:15:20.448032 kubelet[2506]: E1101 00:15:20.448005 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ln244" podUID="d5658fcc-61ca-4e96-9f79-25e33876cacb" Nov 1 00:15:21.699908 kubelet[2506]: E1101 00:15:21.697260 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:15:21.703779 kubelet[2506]: E1101 00:15:21.700816 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:15:21.703941 containerd[1460]: time="2025-11-01T00:15:21.702808716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:15:22.101875 containerd[1460]: time="2025-11-01T00:15:22.101070727Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:15:22.107894 containerd[1460]: time="2025-11-01T00:15:22.107799600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:15:22.108134 containerd[1460]: time="2025-11-01T00:15:22.107890111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 
00:15:22.108191 kubelet[2506]: E1101 00:15:22.108056 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:15:22.108191 kubelet[2506]: E1101 00:15:22.108113 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:15:22.108316 kubelet[2506]: E1101 00:15:22.108238 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-b56b4988b-k8vh2_calico-apiserver(f11d7d31-f676-4516-b063-ddcb43a2faf5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:15:22.108387 kubelet[2506]: E1101 00:15:22.108341 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b56b4988b-k8vh2" podUID="f11d7d31-f676-4516-b063-ddcb43a2faf5" Nov 1 00:15:23.578174 systemd[1]: Started sshd@18-10.0.0.19:22-10.0.0.1:39848.service - OpenSSH per-connection server daemon (10.0.0.1:39848). Nov 1 00:15:23.620733 sshd[5669]: Accepted publickey for core from 10.0.0.1 port 39848 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:15:23.624258 sshd[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:15:23.630369 systemd-logind[1438]: New session 19 of user core. Nov 1 00:15:23.638948 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 1 00:15:23.700102 containerd[1460]: time="2025-11-01T00:15:23.699665713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:15:23.821786 sshd[5669]: pam_unix(sshd:session): session closed for user core Nov 1 00:15:23.835036 systemd[1]: sshd@18-10.0.0.19:22-10.0.0.1:39848.service: Deactivated successfully. Nov 1 00:15:23.837656 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:15:23.842186 systemd-logind[1438]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:15:23.853297 systemd[1]: Started sshd@19-10.0.0.19:22-10.0.0.1:39854.service - OpenSSH per-connection server daemon (10.0.0.1:39854). Nov 1 00:15:23.856108 systemd-logind[1438]: Removed session 19. Nov 1 00:15:23.892742 sshd[5684]: Accepted publickey for core from 10.0.0.1 port 39854 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:15:23.896719 sshd[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:15:23.921945 systemd-logind[1438]: New session 20 of user core. 
Nov 1 00:15:23.930479 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 1 00:15:24.074811 containerd[1460]: time="2025-11-01T00:15:24.073549513Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:15:24.078739 containerd[1460]: time="2025-11-01T00:15:24.078186574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:15:24.078739 containerd[1460]: time="2025-11-01T00:15:24.078331869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:15:24.079004 kubelet[2506]: E1101 00:15:24.078911 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:15:24.079004 kubelet[2506]: E1101 00:15:24.078975 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:15:24.079725 kubelet[2506]: E1101 00:15:24.079067 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-b56b4988b-mnxsg_calico-apiserver(42d6452e-a1e5-4daf-80fd-e1f205f5b03a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:15:24.079725 kubelet[2506]: E1101 00:15:24.079105 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b56b4988b-mnxsg" podUID="42d6452e-a1e5-4daf-80fd-e1f205f5b03a" Nov 1 00:15:24.698255 containerd[1460]: time="2025-11-01T00:15:24.698126835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:15:24.812273 sshd[5684]: pam_unix(sshd:session): session closed for user core Nov 1 00:15:24.828782 systemd[1]: sshd@19-10.0.0.19:22-10.0.0.1:39854.service: Deactivated successfully. Nov 1 00:15:24.832535 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:15:24.835270 systemd-logind[1438]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:15:24.846382 systemd[1]: Started sshd@20-10.0.0.19:22-10.0.0.1:39860.service - OpenSSH per-connection server daemon (10.0.0.1:39860). Nov 1 00:15:24.848176 systemd-logind[1438]: Removed session 20. 
Nov 1 00:15:24.928405 sshd[5697]: Accepted publickey for core from 10.0.0.1 port 39860 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:15:24.940900 sshd[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:15:24.955953 systemd-logind[1438]: New session 21 of user core. Nov 1 00:15:24.981336 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 1 00:15:25.214795 containerd[1460]: time="2025-11-01T00:15:25.214137163Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:15:25.217071 containerd[1460]: time="2025-11-01T00:15:25.216858641Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:15:25.217071 containerd[1460]: time="2025-11-01T00:15:25.217007934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:15:25.219077 kubelet[2506]: E1101 00:15:25.217407 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:15:25.219077 kubelet[2506]: E1101 00:15:25.217481 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:15:25.219077 kubelet[2506]: E1101 00:15:25.217589 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-f77d9cc7f-vdvzf_calico-system(a74773bf-2487-45d4-8b3d-33b1f685360f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:15:25.225043 containerd[1460]: time="2025-11-01T00:15:25.220986805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:15:25.569083 containerd[1460]: time="2025-11-01T00:15:25.566765806Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:15:25.613226 containerd[1460]: time="2025-11-01T00:15:25.612858432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:15:25.613226 containerd[1460]: time="2025-11-01T00:15:25.612987426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:15:25.613468 kubelet[2506]: E1101 00:15:25.613410 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:15:25.613536 kubelet[2506]: E1101 00:15:25.613482 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:15:25.613664 kubelet[2506]: E1101 00:15:25.613614 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-f77d9cc7f-vdvzf_calico-system(a74773bf-2487-45d4-8b3d-33b1f685360f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:15:25.614332 kubelet[2506]: E1101 00:15:25.613731 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f77d9cc7f-vdvzf" podUID="a74773bf-2487-45d4-8b3d-33b1f685360f" Nov 1 00:15:25.704975 containerd[1460]: time="2025-11-01T00:15:25.704627866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:15:26.194579 containerd[1460]: time="2025-11-01T00:15:26.194497356Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:15:26.197925 containerd[1460]: time="2025-11-01T00:15:26.197865519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:15:26.198045 containerd[1460]: time="2025-11-01T00:15:26.197993731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:15:26.198987 kubelet[2506]: E1101 00:15:26.198246 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:15:26.198987 kubelet[2506]: E1101 00:15:26.198323 2506 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:15:26.198987 kubelet[2506]: E1101 00:15:26.198444 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-558d5b9ff5-tdbvc_calico-system(9265ab6d-1d0a-42f4-baa7-12e5c42cad61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:15:26.198987 kubelet[2506]: E1101 00:15:26.198489 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558d5b9ff5-tdbvc" podUID="9265ab6d-1d0a-42f4-baa7-12e5c42cad61" Nov 1 00:15:26.237462 sshd[5697]: pam_unix(sshd:session): session closed for user core Nov 1 00:15:26.248471 systemd[1]: sshd@20-10.0.0.19:22-10.0.0.1:39860.service: Deactivated successfully. Nov 1 00:15:26.251099 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:15:26.253081 systemd-logind[1438]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:15:26.260154 systemd[1]: Started sshd@21-10.0.0.19:22-10.0.0.1:56384.service - OpenSSH per-connection server daemon (10.0.0.1:56384). Nov 1 00:15:26.261600 systemd-logind[1438]: Removed session 21. Nov 1 00:15:26.307705 sshd[5738]: Accepted publickey for core from 10.0.0.1 port 56384 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:15:26.310142 sshd[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:15:26.327777 systemd-logind[1438]: New session 22 of user core. Nov 1 00:15:26.335148 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 1 00:15:26.701035 containerd[1460]: time="2025-11-01T00:15:26.700716544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:15:26.825086 sshd[5738]: pam_unix(sshd:session): session closed for user core Nov 1 00:15:26.848576 systemd[1]: sshd@21-10.0.0.19:22-10.0.0.1:56384.service: Deactivated successfully. Nov 1 00:15:26.856005 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:15:26.858944 systemd-logind[1438]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:15:26.876301 systemd[1]: Started sshd@22-10.0.0.19:22-10.0.0.1:56392.service - OpenSSH per-connection server daemon (10.0.0.1:56392). Nov 1 00:15:26.879118 systemd-logind[1438]: Removed session 22. 
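By this point every Calico component in the log (whisker, whisker-backend, apiserver, goldmane, csi, node-driver-registrar, kube-controllers) has failed identically on the v3.30.4 tag, which points at a single root cause: the tag is absent from the ghcr.io/flatcar/calico repositories rather than anything pod-specific. One way to confirm, assuming the repository permits anonymous reads, is listing the published tags with go-containerregistry's crane helper:

    // Sketch: list tags for one failing repository; if v3.30.4 is absent from
    // the output, every pull above has to end in "not found".
    package main

    import (
        "fmt"

        "github.com/google/go-containerregistry/pkg/crane"
    )

    func main() {
        tags, err := crane.ListTags("ghcr.io/flatcar/calico/goldmane")
        if err != nil {
            panic(err)
        }
        for _, t := range tags {
            fmt.Println(t)
        }
    }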
Nov 1 00:15:26.931509 sshd[5750]: Accepted publickey for core from 10.0.0.1 port 56392 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:15:26.934085 sshd[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:15:26.941035 systemd-logind[1438]: New session 23 of user core. Nov 1 00:15:26.951038 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 1 00:15:27.069108 containerd[1460]: time="2025-11-01T00:15:27.069025404Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:15:27.115397 containerd[1460]: time="2025-11-01T00:15:27.115272002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:15:27.115592 containerd[1460]: time="2025-11-01T00:15:27.115418398Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:15:27.115920 kubelet[2506]: E1101 00:15:27.115851 2506 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:15:27.116302 kubelet[2506]: E1101 00:15:27.115929 2506 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:15:27.116302 kubelet[2506]: E1101 00:15:27.116031 2506 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xs644_calico-system(e75afe96-48a0-4769-9bc6-591261c95345): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:15:27.116302 kubelet[2506]: E1101 00:15:27.116067 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xs644" podUID="e75afe96-48a0-4769-9bc6-591261c95345" Nov 1 00:15:27.146736 sshd[5750]: pam_unix(sshd:session): session closed for user core Nov 1 00:15:27.153419 systemd[1]: sshd@22-10.0.0.19:22-10.0.0.1:56392.service: Deactivated successfully. Nov 1 00:15:27.157272 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:15:27.158319 systemd-logind[1438]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:15:27.163075 systemd-logind[1438]: Removed session 23. 
Nov 1 00:15:29.696357 kubelet[2506]: E1101 00:15:29.696272 2506 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:15:32.159990 systemd[1]: Started sshd@23-10.0.0.19:22-10.0.0.1:56404.service - OpenSSH per-connection server daemon (10.0.0.1:56404). Nov 1 00:15:32.216617 sshd[5766]: Accepted publickey for core from 10.0.0.1 port 56404 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:15:32.219083 sshd[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:15:32.224731 systemd-logind[1438]: New session 24 of user core. Nov 1 00:15:32.238990 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 1 00:15:32.370291 sshd[5766]: pam_unix(sshd:session): session closed for user core Nov 1 00:15:32.374414 systemd[1]: sshd@23-10.0.0.19:22-10.0.0.1:56404.service: Deactivated successfully. Nov 1 00:15:32.377457 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:15:32.380966 systemd-logind[1438]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:15:32.382352 systemd-logind[1438]: Removed session 24. Nov 1 00:15:32.697821 kubelet[2506]: E1101 00:15:32.697736 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ln244" podUID="d5658fcc-61ca-4e96-9f79-25e33876cacb" Nov 1 00:15:35.698424 kubelet[2506]: E1101 00:15:35.698370 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b56b4988b-mnxsg" podUID="42d6452e-a1e5-4daf-80fd-e1f205f5b03a" Nov 1 00:15:36.697765 kubelet[2506]: E1101 00:15:36.697661 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b56b4988b-k8vh2" 
podUID="f11d7d31-f676-4516-b063-ddcb43a2faf5" Nov 1 00:15:37.393270 systemd[1]: Started sshd@24-10.0.0.19:22-10.0.0.1:60150.service - OpenSSH per-connection server daemon (10.0.0.1:60150). Nov 1 00:15:37.424729 sshd[5784]: Accepted publickey for core from 10.0.0.1 port 60150 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:15:37.426811 sshd[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:15:37.431662 systemd-logind[1438]: New session 25 of user core. Nov 1 00:15:37.438055 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 1 00:15:37.569239 sshd[5784]: pam_unix(sshd:session): session closed for user core Nov 1 00:15:37.574181 systemd[1]: sshd@24-10.0.0.19:22-10.0.0.1:60150.service: Deactivated successfully. Nov 1 00:15:37.576749 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 00:15:37.577521 systemd-logind[1438]: Session 25 logged out. Waiting for processes to exit. Nov 1 00:15:37.579146 systemd-logind[1438]: Removed session 25. Nov 1 00:15:40.699203 kubelet[2506]: E1101 00:15:40.699136 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f77d9cc7f-vdvzf" podUID="a74773bf-2487-45d4-8b3d-33b1f685360f" Nov 1 00:15:41.698125 kubelet[2506]: E1101 00:15:41.697831 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558d5b9ff5-tdbvc" podUID="9265ab6d-1d0a-42f4-baa7-12e5c42cad61" Nov 1 00:15:41.698125 kubelet[2506]: E1101 00:15:41.697962 2506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xs644" podUID="e75afe96-48a0-4769-9bc6-591261c95345" Nov 1 00:15:42.316285 systemd[1]: run-containerd-runc-k8s.io-7fe70780ce5df6b87368dbd885c4a656165677fce2a3b39354951f8ac9c354e7-runc.QAutkU.mount: Deactivated successfully. 
Nov 1 00:15:42.585421 systemd[1]: Started sshd@25-10.0.0.19:22-10.0.0.1:60154.service - OpenSSH per-connection server daemon (10.0.0.1:60154). Nov 1 00:15:42.665052 sshd[5822]: Accepted publickey for core from 10.0.0.1 port 60154 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:15:42.667620 sshd[5822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:15:42.673491 systemd-logind[1438]: New session 26 of user core. Nov 1 00:15:42.684002 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 1 00:15:42.894942 sshd[5822]: pam_unix(sshd:session): session closed for user core Nov 1 00:15:42.902503 systemd-logind[1438]: Session 26 logged out. Waiting for processes to exit. Nov 1 00:15:42.903391 systemd[1]: sshd@25-10.0.0.19:22-10.0.0.1:60154.service: Deactivated successfully. Nov 1 00:15:42.906736 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 00:15:42.907608 systemd-logind[1438]: Removed session 26.