Nov 8 00:23:13.953185 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025 Nov 8 00:23:13.953206 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:23:13.953218 kernel: BIOS-provided physical RAM map: Nov 8 00:23:13.953224 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 8 00:23:13.953230 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 8 00:23:13.953236 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 8 00:23:13.953243 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Nov 8 00:23:13.953250 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Nov 8 00:23:13.953256 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 8 00:23:13.953265 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Nov 8 00:23:13.953271 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 8 00:23:13.953277 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 8 00:23:13.953288 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 8 00:23:13.953294 kernel: NX (Execute Disable) protection: active Nov 8 00:23:13.953302 kernel: APIC: Static calls initialized Nov 8 00:23:13.953314 kernel: SMBIOS 2.8 present. 
Nov 8 00:23:13.953321 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Nov 8 00:23:13.953327 kernel: Hypervisor detected: KVM Nov 8 00:23:13.953334 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 8 00:23:13.953341 kernel: kvm-clock: using sched offset of 3485767619 cycles Nov 8 00:23:13.953348 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 8 00:23:13.953355 kernel: tsc: Detected 2794.748 MHz processor Nov 8 00:23:13.953362 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 8 00:23:13.953370 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 8 00:23:13.953577 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Nov 8 00:23:13.953584 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 8 00:23:13.953591 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 8 00:23:13.953598 kernel: Using GB pages for direct mapping Nov 8 00:23:13.953605 kernel: ACPI: Early table checksum verification disabled Nov 8 00:23:13.953611 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Nov 8 00:23:13.953618 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:23:13.953625 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:23:13.953632 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:23:13.953641 kernel: ACPI: FACS 0x000000009CFE0000 000040 Nov 8 00:23:13.953648 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:23:13.953655 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:23:13.953662 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:23:13.953669 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:23:13.953676 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Nov 8 00:23:13.953684 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Nov 8 00:23:13.953698 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Nov 8 00:23:13.953710 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Nov 8 00:23:13.953718 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Nov 8 00:23:13.953726 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Nov 8 00:23:13.953733 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Nov 8 00:23:13.953740 kernel: No NUMA configuration found Nov 8 00:23:13.953747 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Nov 8 00:23:13.953757 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Nov 8 00:23:13.953764 kernel: Zone ranges: Nov 8 00:23:13.953771 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 8 00:23:13.953778 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Nov 8 00:23:13.953785 kernel: Normal empty Nov 8 00:23:13.953792 kernel: Movable zone start for each node Nov 8 00:23:13.953800 kernel: Early memory node ranges Nov 8 00:23:13.953807 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 8 00:23:13.953814 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Nov 8 00:23:13.953821 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 8 00:23:13.953833 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 8 00:23:13.953844 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 8 00:23:13.953853 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Nov 8 00:23:13.953860 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 8 00:23:13.953868 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 8 00:23:13.953875 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 8 00:23:13.953882 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 8 00:23:13.953889 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 8 00:23:13.953921 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 8 00:23:13.953932 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 8 00:23:13.953940 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 8 00:23:13.953947 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 8 00:23:13.953954 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 8 00:23:13.953961 kernel: TSC deadline timer available Nov 8 00:23:13.953968 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Nov 8 00:23:13.953976 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 8 00:23:13.953983 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 8 00:23:13.953992 kernel: kvm-guest: setup PV sched yield Nov 8 00:23:13.954002 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 8 00:23:13.954009 kernel: Booting paravirtualized kernel on KVM Nov 8 00:23:13.954016 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 8 00:23:13.954024 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Nov 8 00:23:13.954031 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288 Nov 8 00:23:13.954038 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152 Nov 8 00:23:13.954045 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 8 00:23:13.954052 kernel: kvm-guest: PV spinlocks enabled Nov 8 00:23:13.954059 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 8 00:23:13.954070 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:23:13.954077 kernel: random: crng init done Nov 8 00:23:13.954085 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 8 00:23:13.954092 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:23:13.954099 kernel: Fallback order for Node 0: 0
Nov 8 00:23:13.954106 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Nov 8 00:23:13.954113 kernel: Policy zone: DMA32 Nov 8 00:23:13.954120 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:23:13.954130 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 136900K reserved, 0K cma-reserved) Nov 8 00:23:13.954137 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 8 00:23:13.954145 kernel: ftrace: allocating 37980 entries in 149 pages Nov 8 00:23:13.954152 kernel: ftrace: allocated 149 pages with 4 groups Nov 8 00:23:13.954159 kernel: Dynamic Preempt: voluntary Nov 8 00:23:13.954166 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:23:13.954174 kernel: rcu: RCU event tracing is enabled. Nov 8 00:23:13.954181 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 8 00:23:13.954188 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:23:13.954198 kernel: Rude variant of Tasks RCU enabled. Nov 8 00:23:13.954205 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:23:13.954213 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 8 00:23:13.954220 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 8 00:23:13.954229 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 8 00:23:13.954237 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 8 00:23:13.954244 kernel: Console: colour VGA+ 80x25 Nov 8 00:23:13.954251 kernel: printk: console [ttyS0] enabled Nov 8 00:23:13.954258 kernel: ACPI: Core revision 20230628 Nov 8 00:23:13.954268 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 8 00:23:13.954275 kernel: APIC: Switch to symmetric I/O mode setup Nov 8 00:23:13.954282 kernel: x2apic enabled Nov 8 00:23:13.954289 kernel: APIC: Switched APIC routing to: physical x2apic Nov 8 00:23:13.954296 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 8 00:23:13.954304 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 8 00:23:13.954311 kernel: kvm-guest: setup PV IPIs Nov 8 00:23:13.954318 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 8 00:23:13.954335 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 8 00:23:13.954343 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Nov 8 00:23:13.954350 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 8 00:23:13.954358 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 8 00:23:13.954367 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 8 00:23:13.954375 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 8 00:23:13.954382 kernel: Spectre V2 : Mitigation: Retpolines Nov 8 00:23:13.954390 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 8 00:23:13.954398 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 8 00:23:13.954408 kernel: active return thunk: retbleed_return_thunk Nov 8 00:23:13.954415 kernel: RETBleed: Mitigation: untrained return thunk Nov 8 00:23:13.954425 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 8 00:23:13.954432 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 8 00:23:13.954440 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 8 00:23:13.954448 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 8 00:23:13.954455 kernel: active return thunk: srso_return_thunk Nov 8 00:23:13.954463 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 8 00:23:13.954473 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 8 00:23:13.954480 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 8 00:23:13.954488 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 8 00:23:13.954502 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 8 00:23:13.954510 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 8 00:23:13.954518 kernel: Freeing SMP alternatives memory: 32K Nov 8 00:23:13.954525 kernel: pid_max: default: 32768 minimum: 301 Nov 8 00:23:13.954533 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:23:13.954540 kernel: landlock: Up and running. Nov 8 00:23:13.954551 kernel: SELinux: Initializing. Nov 8 00:23:13.954559 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 8 00:23:13.954566 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 8 00:23:13.954574 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 8 00:23:13.954581 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 8 00:23:13.954589 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 8 00:23:13.954597 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 8 00:23:13.954604 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 8 00:23:13.954614 kernel: ... version: 0 Nov 8 00:23:13.954624 kernel: ... bit width: 48 Nov 8 00:23:13.954631 kernel: ... generic registers: 6 Nov 8 00:23:13.954639 kernel: ... value mask: 0000ffffffffffff Nov 8 00:23:13.954646 kernel: ... max period: 00007fffffffffff Nov 8 00:23:13.954653 kernel: ... fixed-purpose events: 0
Nov 8 00:23:13.954661 kernel: ... event mask: 000000000000003f Nov 8 00:23:13.954669 kernel: signal: max sigframe size: 1776 Nov 8 00:23:13.954676 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:23:13.954685 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:23:13.954699 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:23:13.954709 kernel: smpboot: x86: Booting SMP configuration: Nov 8 00:23:13.954717 kernel: .... node #0, CPUs: #1 #2 #3 Nov 8 00:23:13.954724 kernel: smp: Brought up 1 node, 4 CPUs Nov 8 00:23:13.954732 kernel: smpboot: Max logical packages: 1 Nov 8 00:23:13.954739 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Nov 8 00:23:13.954747 kernel: devtmpfs: initialized Nov 8 00:23:13.954754 kernel: x86/mm: Memory block size: 128MB Nov 8 00:23:13.954762 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:23:13.954772 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 8 00:23:13.954779 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:23:13.954787 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:23:13.954794 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:23:13.954802 kernel: audit: type=2000 audit(1762561393.367:1): state=initialized audit_enabled=0 res=1 Nov 8 00:23:13.954810 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:23:13.954818 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 00:23:13.954825 kernel: cpuidle: using governor menu Nov 8 00:23:13.954832 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:23:13.954842 kernel: dca service started, version 1.12.1 Nov 8 00:23:13.954850 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Nov 8 00:23:13.954857 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 8 00:23:13.954865 kernel: PCI: Using configuration type 1 for base access Nov 8 00:23:13.954872 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:23:13.954880 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:23:13.954888 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:23:13.954907 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:23:13.954915 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:23:13.954926 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:23:13.954933 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:23:13.954941 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:23:13.954948 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 8 00:23:13.954955 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 8 00:23:13.954963 kernel: ACPI: Interpreter enabled Nov 8 00:23:13.954970 kernel: ACPI: PM: (supports S0 S3 S5) Nov 8 00:23:13.954977 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 00:23:13.954985 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 00:23:13.954995 kernel: PCI: Using E820 reservations for host bridge windows Nov 8 00:23:13.955002 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 8 00:23:13.955010 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 8 00:23:13.955219 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 8 00:23:13.955359 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 8 00:23:13.955486 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 8 00:23:13.955506 kernel: PCI host bridge to bus 0000:00 Nov 8 00:23:13.955645 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 8 00:23:13.955778 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 8 00:23:13.955936 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 8 00:23:13.956058 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Nov 8 00:23:13.956172 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 8 00:23:13.956288 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Nov 8 00:23:13.956403 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 8 00:23:13.956581 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 8 00:23:13.956735 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Nov 8 00:23:13.956868 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Nov 8 00:23:13.957015 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Nov 8 00:23:13.957141 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Nov 8 00:23:13.957265 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 8 00:23:13.957413 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Nov 8 00:23:13.957559 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Nov 8 00:23:13.957687 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Nov 8 00:23:13.957813 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Nov 8 00:23:13.957969 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Nov 8 00:23:13.958100 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Nov 8 00:23:13.958227 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Nov 8 00:23:13.958353 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 8 00:23:13.958510 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 8 00:23:13.958640 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Nov 8 00:23:13.958767 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Nov 8 00:23:13.958892 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Nov 8 00:23:13.959036 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Nov 8 00:23:13.959176 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 8 00:23:13.959309 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 8 00:23:13.959453 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 8 00:23:13.959591 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Nov 8 00:23:13.959717 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Nov 8 00:23:13.959855 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 8 00:23:13.960016 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Nov 8 00:23:13.960028 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 8 00:23:13.960040 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 8 00:23:13.960048 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 8 00:23:13.960055 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 8 00:23:13.960062 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 8 00:23:13.960070 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 8 00:23:13.960077 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 8 00:23:13.960085 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 8 00:23:13.960092 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 8 00:23:13.960100 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 8 00:23:13.960110 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 8 00:23:13.960118 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 8 00:23:13.960125 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 8 00:23:13.960133 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 8 00:23:13.960140 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 8 00:23:13.960148 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 8 00:23:13.960155 kernel: iommu: Default domain type: Translated Nov 8 00:23:13.960162 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 00:23:13.960170 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:23:13.960180 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 00:23:13.960187 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 8 00:23:13.960195 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Nov 8 00:23:13.960321 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 8 00:23:13.960447 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 8 00:23:13.960581 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 00:23:13.960592 kernel: vgaarb: loaded Nov 8 00:23:13.960599 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 8 00:23:13.960611 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 8 00:23:13.960618 kernel: clocksource: Switched to clocksource kvm-clock Nov 8 00:23:13.960626 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:23:13.960634 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:23:13.960641 kernel: pnp: PnP ACPI init
Nov 8 00:23:13.960797 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 8 00:23:13.960809 kernel: pnp: PnP ACPI: found 6 devices Nov 8 00:23:13.960816 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:23:13.960824 kernel: NET: Registered PF_INET protocol family Nov 8 00:23:13.960836 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 8 00:23:13.960846 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 8 00:23:13.960854 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:23:13.960864 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:23:13.960871 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 8 00:23:13.960879 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 8 00:23:13.960887 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 8 00:23:13.960907 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 8 00:23:13.960918 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:23:13.960926 kernel: NET: Registered PF_XDP protocol family Nov 8 00:23:13.961046 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 8 00:23:13.961162 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 8 00:23:13.961277 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 8 00:23:13.961392 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Nov 8 00:23:13.961515 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 8 00:23:13.961631 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Nov 8 00:23:13.961641 kernel: PCI: CLS 0 bytes, default 64 Nov 8 00:23:13.961653 kernel: Initialise system trusted keyrings Nov 8 00:23:13.961660 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 8 00:23:13.961668 kernel: Key type asymmetric registered Nov 8 00:23:13.961675 kernel: Asymmetric key parser 'x509' registered Nov 8 00:23:13.961683 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 8 00:23:13.961691 kernel: io scheduler mq-deadline registered Nov 8 00:23:13.961698 kernel: io scheduler kyber registered Nov 8 00:23:13.961706 kernel: io scheduler bfq registered Nov 8 00:23:13.961713 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:23:13.961723 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 8 00:23:13.961731 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 8 00:23:13.961739 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 8 00:23:13.961746 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:23:13.961754 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:23:13.961761 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 8 00:23:13.961769 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 8 00:23:13.961776 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 8 00:23:13.961932 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 8 00:23:13.961948 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 8 00:23:13.962069 kernel: rtc_cmos 00:04: registered as rtc0 Nov 8 00:23:13.962188 kernel: rtc_cmos 00:04: setting system clock to 2025-11-08T00:23:13 UTC (1762561393)
Nov 8 00:23:13.962306 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 8 00:23:13.962317 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 8 00:23:13.962324 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:23:13.962332 kernel: Segment Routing with IPv6 Nov 8 00:23:13.962339 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:23:13.962350 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:23:13.962358 kernel: Key type dns_resolver registered Nov 8 00:23:13.962365 kernel: IPI shorthand broadcast: enabled Nov 8 00:23:13.962373 kernel: sched_clock: Marking stable (910001654, 194120268)->(1157434442, -53312520) Nov 8 00:23:13.962380 kernel: registered taskstats version 1 Nov 8 00:23:13.962388 kernel: Loading compiled-in X.509 certificates Nov 8 00:23:13.962396 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:23:13.962403 kernel: Key type .fscrypt registered Nov 8 00:23:13.962411 kernel: Key type fscrypt-provisioning registered Nov 8 00:23:13.962421 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 8 00:23:13.962428 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:23:13.962436 kernel: ima: No architecture policies found Nov 8 00:23:13.962443 kernel: clk: Disabling unused clocks Nov 8 00:23:13.962450 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:23:13.962458 kernel: Write protecting the kernel read-only data: 36864k Nov 8 00:23:13.962466 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:23:13.962473 kernel: Run /init as init process Nov 8 00:23:13.962480 kernel: with arguments: Nov 8 00:23:13.962490 kernel: /init Nov 8 00:23:13.962506 kernel: with environment: Nov 8 00:23:13.962514 kernel: HOME=/ Nov 8 00:23:13.962521 kernel: TERM=linux Nov 8 00:23:13.962531 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:23:13.962541 systemd[1]: Detected virtualization kvm. Nov 8 00:23:13.962549 systemd[1]: Detected architecture x86-64. Nov 8 00:23:13.962557 systemd[1]: Running in initrd. Nov 8 00:23:13.962567 systemd[1]: No hostname configured, using default hostname. Nov 8 00:23:13.962575 systemd[1]: Hostname set to . Nov 8 00:23:13.962584 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:23:13.962592 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:23:13.962600 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:23:13.962608 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:23:13.962617 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:23:13.962625 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:23:13.962636 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:23:13.962656 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:23:13.962668 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:23:13.962676 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:23:13.962689 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:23:13.962697 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:23:13.962706 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:23:13.962714 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:23:13.962722 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:23:13.962730 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:23:13.962739 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:23:13.962747 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:23:13.962755 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:23:13.962766 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:23:13.962774 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:23:13.962783 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:23:13.962791 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:23:13.962799 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:23:13.962807 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:23:13.962816 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:23:13.962824 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:23:13.962835 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:23:13.962843 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:23:13.962851 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:23:13.962859 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:13.962868 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:23:13.962876 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:23:13.962884 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:23:13.962928 systemd-journald[193]: Collecting audit messages is disabled. Nov 8 00:23:13.962947 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:23:13.962959 systemd-journald[193]: Journal started Nov 8 00:23:13.962976 systemd-journald[193]: Runtime Journal (/run/log/journal/de8b385436a1451a84785741553dd6a4) is 6.0M, max 48.4M, 42.3M free. Nov 8 00:23:13.945219 systemd-modules-load[194]: Inserted module 'overlay' Nov 8 00:23:14.027951 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:23:14.027985 kernel: Bridge firewalling registered Nov 8 00:23:14.028011 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:23:13.973271 systemd-modules-load[194]: Inserted module 'br_netfilter' Nov 8 00:23:14.028948 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:23:14.031985 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 8 00:23:14.047115 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:23:14.052116 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:23:14.056301 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:23:14.061086 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:23:14.067049 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:23:14.071699 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:23:14.075550 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:23:14.094123 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:23:14.097062 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:23:14.101868 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:23:14.110965 dracut-cmdline[226]: dracut-dracut-053 Nov 8 00:23:14.115073 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:23:14.126149 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:23:14.138286 systemd-resolved[227]: Positive Trust Anchors: Nov 8 00:23:14.138299 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:23:14.138330 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:23:14.140991 systemd-resolved[227]: Defaulting to hostname 'linux'. Nov 8 00:23:14.142155 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:23:14.144350 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:23:14.211943 kernel: SCSI subsystem initialized Nov 8 00:23:14.220932 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:23:14.232921 kernel: iscsi: registered transport (tcp) Nov 8 00:23:14.254178 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:23:14.254212 kernel: QLogic iSCSI HBA Driver Nov 8 00:23:14.301997 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:23:14.315038 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:23:14.345199 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 8 00:23:14.345245 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:23:14.346775 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:23:14.388947 kernel: raid6: avx2x4 gen() 29400 MB/s Nov 8 00:23:14.405942 kernel: raid6: avx2x2 gen() 31029 MB/s Nov 8 00:23:14.423693 kernel: raid6: avx2x1 gen() 25429 MB/s Nov 8 00:23:14.423735 kernel: raid6: using algorithm avx2x2 gen() 31029 MB/s Nov 8 00:23:14.442069 kernel: raid6: .... xor() 18334 MB/s, rmw enabled Nov 8 00:23:14.442147 kernel: raid6: using avx2x2 recovery algorithm Nov 8 00:23:14.465931 kernel: xor: automatically using best checksumming function avx Nov 8 00:23:14.623954 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:23:14.638412 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:23:14.649077 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:23:14.663029 systemd-udevd[413]: Using default interface naming scheme 'v255'. Nov 8 00:23:14.667687 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:23:14.679138 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 00:23:14.694337 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Nov 8 00:23:14.729281 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:23:14.747154 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:23:14.816230 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:23:14.821092 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:23:14.837390 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:23:14.842447 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:23:14.846443 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:23:14.848541 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:23:14.858279 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:23:14.868912 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 8 00:23:14.872875 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 8 00:23:14.872826 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:23:14.886857 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 8 00:23:14.886875 kernel: GPT:9289727 != 19775487 Nov 8 00:23:14.886886 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 8 00:23:14.886909 kernel: GPT:9289727 != 19775487 Nov 8 00:23:14.886919 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 8 00:23:14.886930 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:23:14.891939 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:23:14.891966 kernel: libata version 3.00 loaded. Nov 8 00:23:14.895459 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:23:14.898640 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:23:14.905341 kernel: ahci 0000:00:1f.2: version 3.0 Nov 8 00:23:14.905565 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 8 00:23:14.905827 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Nov 8 00:23:14.913133 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 8 00:23:14.913553 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 8 00:23:14.916006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:23:14.918344 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:14.923814 kernel: scsi host0: ahci Nov 8 00:23:14.924014 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:23:14.924026 kernel: scsi host1: ahci Nov 8 00:23:14.924187 kernel: AES CTR mode by8 optimization enabled Nov 8 00:23:14.926590 kernel: scsi host2: ahci Nov 8 00:23:14.926821 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (465) Nov 8 00:23:14.929909 kernel: scsi host3: ahci Nov 8 00:23:14.930337 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:14.933364 kernel: scsi host4: ahci Nov 8 00:23:14.939179 kernel: scsi host5: ahci Nov 8 00:23:14.939406 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Nov 8 00:23:14.939418 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Nov 8 00:23:14.939429 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Nov 8 00:23:14.941259 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (461) Nov 8 00:23:14.941281 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Nov 8 00:23:14.941293 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Nov 8 00:23:14.945749 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Nov 8 00:23:14.948269 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:14.968320 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 8 00:23:15.035195 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:15.046479 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 8 00:23:15.050737 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 8 00:23:15.057673 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 8 00:23:15.067219 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 8 00:23:15.079067 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:23:15.081704 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:23:15.105435 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:23:15.235942 disk-uuid[555]: Primary Header is updated. Nov 8 00:23:15.235942 disk-uuid[555]: Secondary Entries is updated. Nov 8 00:23:15.235942 disk-uuid[555]: Secondary Header is updated. 
Nov 8 00:23:15.241932 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:23:15.247934 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:23:15.252930 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 8 00:23:15.252961 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 8 00:23:15.257942 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 8 00:23:15.257971 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 8 00:23:15.260148 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 8 00:23:15.263001 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 8 00:23:15.263106 kernel: ata3.00: applying bridge limits Nov 8 00:23:15.263935 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 8 00:23:15.265508 kernel: ata3.00: configured for UDMA/100 Nov 8 00:23:15.268028 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 8 00:23:15.322611 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 8 00:23:15.323078 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 8 00:23:15.338925 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 8 00:23:16.248030 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:23:16.248380 disk-uuid[564]: The operation has completed successfully. Nov 8 00:23:16.278953 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:23:16.279110 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:23:16.321102 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:23:16.325212 sh[592]: Success Nov 8 00:23:16.341171 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 8 00:23:16.379892 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:23:16.411868 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:23:16.433450 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:23:16.575015 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:23:16.575079 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:23:16.575090 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:23:16.577873 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:23:16.577888 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:23:16.583974 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:23:16.588010 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 00:23:16.603054 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:23:16.607255 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:23:16.616466 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:23:16.616500 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:23:16.616511 kernel: BTRFS info (device vda6): using free space tree Nov 8 00:23:16.620916 kernel: BTRFS info (device vda6): auto enabling async discard Nov 8 00:23:16.630414 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Nov 8 00:23:16.633411 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:23:16.642978 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:23:16.653082 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:23:16.772935 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:23:16.798075 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:23:16.834529 systemd-networkd[773]: lo: Link UP Nov 8 00:23:16.834541 systemd-networkd[773]: lo: Gained carrier Nov 8 00:23:16.836462 systemd-networkd[773]: Enumeration completed Nov 8 00:23:16.837101 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:16.837105 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:23:16.838463 systemd-networkd[773]: eth0: Link UP Nov 8 00:23:16.838467 systemd-networkd[773]: eth0: Gained carrier Nov 8 00:23:16.838475 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:16.839714 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:23:16.858679 systemd[1]: Reached target network.target - Network. Nov 8 00:23:16.870995 systemd-networkd[773]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 8 00:23:17.183142 ignition[678]: Ignition 2.19.0 Nov 8 00:23:17.183163 ignition[678]: Stage: fetch-offline Nov 8 00:23:17.183235 ignition[678]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:17.183251 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:23:17.183377 ignition[678]: parsed url from cmdline: "" Nov 8 00:23:17.183382 ignition[678]: no config URL provided Nov 8 00:23:17.183387 ignition[678]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:23:17.183400 ignition[678]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:23:17.183446 ignition[678]: op(1): [started] loading QEMU firmware config module Nov 8 00:23:17.183452 ignition[678]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 8 00:23:17.201561 ignition[678]: op(1): [finished] loading QEMU firmware config module Nov 8 00:23:17.280696 ignition[678]: parsing config with SHA512: 1b192c2f215d125682db70c518b097f107d97a96de31699ad6c29c46fe47a398a75b082f4cc3f088f70f019a7fc7933009ee8e28e8949697b17a307ca848b635 Nov 8 00:23:17.285523 unknown[678]: fetched base config from "system" Nov 8 00:23:17.285538 unknown[678]: fetched user config from "qemu" Nov 8 00:23:17.285974 ignition[678]: fetch-offline: fetch-offline passed Nov 8 00:23:17.288912 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:23:17.286042 ignition[678]: Ignition finished successfully Nov 8 00:23:17.291975 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 8 00:23:17.300142 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 8 00:23:17.331495 ignition[784]: Ignition 2.19.0 Nov 8 00:23:17.331510 ignition[784]: Stage: kargs Nov 8 00:23:17.331763 ignition[784]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:17.331780 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:23:17.333134 ignition[784]: kargs: kargs passed Nov 8 00:23:17.338076 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:23:17.333200 ignition[784]: Ignition finished successfully Nov 8 00:23:17.347186 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:23:17.366789 ignition[792]: Ignition 2.19.0 Nov 8 00:23:17.366804 ignition[792]: Stage: disks Nov 8 00:23:17.367046 ignition[792]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:17.367062 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:23:17.370590 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:23:17.368235 ignition[792]: disks: disks passed Nov 8 00:23:17.374186 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:23:17.368299 ignition[792]: Ignition finished successfully Nov 8 00:23:17.377299 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:23:17.380413 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:23:17.383882 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:23:17.386832 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:23:17.403167 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:23:17.419490 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 8 00:23:17.426572 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:23:17.437124 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:23:17.541943 kernel: EXT4-fs (vda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:23:17.542510 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:23:17.544135 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:23:17.554101 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:23:17.557437 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:23:17.559002 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 8 00:23:17.567722 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810) Nov 8 00:23:17.559055 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:23:17.577640 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:23:17.577673 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:23:17.577688 kernel: BTRFS info (device vda6): using free space tree Nov 8 00:23:17.577702 kernel: BTRFS info (device vda6): auto enabling async discard Nov 8 00:23:17.559083 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:23:17.602168 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:23:17.608995 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Nov 8 00:23:17.622101 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:23:17.663080 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:23:17.669427 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:23:17.675333 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:23:17.679699 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:23:17.784440 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:23:17.794005 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:23:17.796347 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:23:17.803551 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:23:17.806153 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:23:17.824741 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:23:17.939617 ignition[928]: INFO : Ignition 2.19.0 Nov 8 00:23:17.939617 ignition[928]: INFO : Stage: mount Nov 8 00:23:17.942535 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:17.942535 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:23:17.942535 ignition[928]: INFO : mount: mount passed Nov 8 00:23:17.942535 ignition[928]: INFO : Ignition finished successfully Nov 8 00:23:17.942685 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:23:17.952999 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:23:17.961411 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:23:17.972921 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938) Nov 8 00:23:17.976233 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:23:17.976252 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:23:17.976262 kernel: BTRFS info (device vda6): using free space tree Nov 8 00:23:17.979925 kernel: BTRFS info (device vda6): auto enabling async discard Nov 8 00:23:17.981041 systemd-networkd[773]: eth0: Gained IPv6LL Nov 8 00:23:17.982664 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 8 00:23:18.010080 ignition[955]: INFO : Ignition 2.19.0 Nov 8 00:23:18.010080 ignition[955]: INFO : Stage: files Nov 8 00:23:18.012697 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:18.012697 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:23:18.012697 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:23:18.012697 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:23:18.012697 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:23:18.023424 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:23:18.023424 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:23:18.023424 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:23:18.023424 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 8 00:23:18.023424 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 8 00:23:18.023424 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:23:18.023424 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 8 00:23:18.015284 unknown[955]: wrote ssh authorized keys file for user: core Nov 8 00:23:18.077434 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 8 00:23:18.177173 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:23:18.177173 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:23:18.184320 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:23:18.184320 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:23:18.184320 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:23:18.184320 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:23:18.184320 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:23:18.184320 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:23:18.184320 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:23:18.184320 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:23:18.184320 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:23:18.184320 ignition[955]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:23:18.184320 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:23:18.184320 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:23:18.184320 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 8 00:23:18.460609 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 8 00:23:18.915965 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:23:18.915965 ignition[955]: INFO : files: op(c): [started] processing unit "containerd.service" Nov 8 00:23:18.921538 ignition[955]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 8 00:23:18.925415 ignition[955]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 8 00:23:18.925415 ignition[955]: INFO : files: op(c): [finished] processing unit "containerd.service" Nov 8 00:23:18.925415 ignition[955]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Nov 8 00:23:18.932661 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:23:18.935600 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:23:18.935600 ignition[955]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Nov 8 00:23:18.935600 ignition[955]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Nov 8 00:23:18.942350 ignition[955]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:23:18.945411 ignition[955]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:23:18.945411 ignition[955]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Nov 8 00:23:18.945411 ignition[955]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Nov 8 00:23:18.969728 ignition[955]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 8 00:23:18.977941 ignition[955]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 8 00:23:18.980490 ignition[955]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Nov 8 00:23:18.980490 ignition[955]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:23:18.980490 ignition[955]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" 
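The preset operations just logged boil down to symlink management under the target's .wants directory. A simplified sketch follows; the real systemctl preset logic also consults /usr/lib/systemd/system-preset/ and the unit's [Install] section, which this deliberately skips, and the paths assume the /sysroot prefix Ignition works under.

    import os

    WANTS = "/sysroot/etc/systemd/system/multi-user.target.wants"

    def preset(unit, enabled):
        link = os.path.join(WANTS, unit)
        if enabled:
            os.makedirs(WANTS, exist_ok=True)
            if not os.path.islink(link):
                os.symlink(f"/etc/systemd/system/{unit}", link)
        elif os.path.islink(link):
            os.unlink(link)  # "removing enablement symlink(s)"

    preset("prepare-helm.service", enabled=True)      # op(14)
    preset("coreos-metadata.service", enabled=False)  # op(12)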
Nov 8 00:23:18.980490 ignition[955]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:23:18.980490 ignition[955]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:23:18.980490 ignition[955]: INFO : files: files passed Nov 8 00:23:18.980490 ignition[955]: INFO : Ignition finished successfully Nov 8 00:23:18.997517 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:23:19.008060 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:23:19.011664 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:23:19.015061 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:23:19.015225 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:23:19.031958 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Nov 8 00:23:19.037071 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:23:19.037071 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:23:19.041942 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:23:19.046862 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:23:19.047698 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:23:19.060125 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:23:19.087699 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:23:19.087917 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:23:19.089547 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:23:19.096014 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:23:19.096877 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:23:19.113081 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:23:19.132105 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:23:19.143103 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:23:19.153748 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:23:19.157411 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:23:19.161078 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:23:19.163929 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:23:19.165462 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:23:19.169375 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:23:19.172584 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:23:19.175420 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:23:19.178831 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:23:19.182514 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
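The two downloads earlier in the files stage (the helm tarball and the kubernetes sysext image) each log "GET ...: attempt #1", reflecting a fetch-and-retry loop. A hedged Python sketch of that pattern, with the attempt count and backoff chosen for illustration rather than taken from Ignition:

    import time
    import urllib.request

    def fetch(url, attempts=5, backoff=2.0):
        for n in range(1, attempts + 1):
            print(f"GET {url}: attempt #{n}")
            try:
                with urllib.request.urlopen(url, timeout=30) as resp:
                    return resp.read()
            except OSError:
                if n == attempts:
                    raise
                time.sleep(backoff * n)  # simple linear backoff

    data = fetch("https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz")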
Nov 8 00:23:19.186037 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:23:19.189291 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:23:19.193235 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:23:19.196577 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:23:19.199765 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:23:19.202338 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:23:19.203926 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:23:19.207453 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:23:19.210848 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:23:19.214569 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:23:19.216132 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:23:19.220276 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:23:19.221824 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:23:19.225278 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:23:19.226992 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:23:19.230820 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:23:19.233693 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:23:19.238066 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:23:19.242645 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:23:19.243507 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:23:19.244098 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:23:19.244254 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:23:19.249023 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:23:19.249154 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:23:19.251849 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:23:19.252053 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:23:19.254808 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:23:19.254988 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:23:19.269098 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:23:19.270563 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:23:19.272634 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:23:19.272808 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:23:19.275428 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:23:19.275589 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:23:19.283323 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:23:19.283443 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:23:19.305363 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:23:19.659669 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:23:19.659867 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Nov 8 00:23:19.664118 ignition[1009]: INFO : Ignition 2.19.0 Nov 8 00:23:19.664118 ignition[1009]: INFO : Stage: umount Nov 8 00:23:19.666575 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:19.666575 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:23:19.670588 ignition[1009]: INFO : umount: umount passed Nov 8 00:23:19.671821 ignition[1009]: INFO : Ignition finished successfully Nov 8 00:23:19.675344 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:23:19.675539 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:23:19.676871 systemd[1]: Stopped target network.target - Network. Nov 8 00:23:19.682712 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:23:19.682847 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:23:19.683793 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:23:19.683915 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:23:19.688626 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:23:19.688698 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:23:19.689777 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:23:19.689865 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:23:19.694408 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:23:19.694513 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:23:19.697749 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:23:19.700455 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:23:19.712132 systemd-networkd[773]: eth0: DHCPv6 lease lost Nov 8 00:23:19.715162 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:23:19.715356 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:23:19.716953 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:23:19.717007 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:23:19.728071 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:23:19.731076 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:23:19.731144 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:23:19.737044 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:23:19.741093 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:23:19.742662 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:23:19.761371 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:23:19.761445 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:23:19.764479 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:23:19.764540 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:23:19.767652 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:23:19.767713 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:23:19.768871 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:23:19.769024 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Nov 8 00:23:19.773530 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:23:19.773735 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:23:19.777527 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:23:19.777622 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:23:19.780359 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:23:19.780408 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:23:19.780860 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:23:19.780939 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:23:19.782428 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:23:19.782494 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:23:19.792844 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:23:19.792936 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:23:19.812026 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:23:19.812659 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:23:19.812723 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:23:19.816306 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:23:19.816371 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:23:19.819500 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:23:19.819560 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:23:19.823280 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:23:19.823336 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:19.862420 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:23:19.862586 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:23:19.864196 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:23:19.873035 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:23:19.882544 systemd[1]: Switching root. Nov 8 00:23:19.917027 systemd-journald[193]: Journal stopped Nov 8 00:23:21.134720 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Nov 8 00:23:21.134788 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:23:21.134807 kernel: SELinux: policy capability open_perms=1 Nov 8 00:23:21.134818 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:23:21.134830 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:23:21.134841 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:23:21.134866 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:23:21.134918 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:23:21.134936 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:23:21.134948 kernel: audit: type=1403 audit(1762561400.310:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:23:21.134965 systemd[1]: Successfully loaded SELinux policy in 42.922ms. Nov 8 00:23:21.134993 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.110ms. 
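The gap between "Journal stopped" and journald's SIGTERM message brackets the switch from the initramfs to the real root. Since every entry carries a microsecond timestamp, phase durations can be read straight out of the excerpt; a small parsing sketch, with the two sample lines copied from above:

    from datetime import datetime

    def ts(line):
        mon, day, clock = line.split()[:3]
        return datetime.strptime(f"{mon} {day} {clock}", "%b %d %H:%M:%S.%f")

    stopped = ts("Nov 8 00:23:19.917027 systemd-journald[193]: Journal stopped")
    resumed = ts("Nov 8 00:23:21.134720 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).")
    print(f"switch-root window: {(resumed - stopped).total_seconds():.3f}s")  # ~1.218s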
Nov 8 00:23:21.135009 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:23:21.135021 systemd[1]: Detected virtualization kvm. Nov 8 00:23:21.135034 systemd[1]: Detected architecture x86-64. Nov 8 00:23:21.135052 systemd[1]: Detected first boot. Nov 8 00:23:21.135069 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:23:21.135081 zram_generator::config[1075]: No configuration found. Nov 8 00:23:21.135095 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:23:21.135107 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:23:21.135120 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 8 00:23:21.135133 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:23:21.135149 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:23:21.135163 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:23:21.135178 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:23:21.135191 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:23:21.135204 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:23:21.135217 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:23:21.135229 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:23:21.135241 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:23:21.135254 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:23:21.135266 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:23:21.135278 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:23:21.135293 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:23:21.135306 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:23:21.135319 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:23:21.135331 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:23:21.135344 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:23:21.135356 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:23:21.135372 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:23:21.135392 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:23:21.135410 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:23:21.135425 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:23:21.135441 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:23:21.135465 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:23:21.135480 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
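On KVM, "Initializing machine ID from VM UUID" derives the first-boot machine ID from the hypervisor-provided SMBIOS product UUID. A one-liner showing where that value is exposed (the standard sysfs path, typically readable only by root):

    # Prints the VM UUID systemd seeds the machine ID from on first boot.
    print(open("/sys/class/dmi/id/product_uuid").read().strip())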
Nov 8 00:23:21.135500 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:23:21.135516 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:23:21.135531 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:23:21.135547 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:23:21.135562 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:23:21.135581 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:23:21.135597 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:23:21.135612 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:21.135626 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:23:21.135638 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:23:21.135654 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:23:21.135666 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:23:21.135679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:23:21.135695 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:23:21.135707 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:23:21.135719 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:23:21.135733 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:23:21.135746 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:23:21.135758 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:23:21.135772 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:23:21.135785 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:23:21.135800 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 8 00:23:21.135814 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Nov 8 00:23:21.135827 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:23:21.135839 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:23:21.135852 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:23:21.135864 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:23:21.140364 systemd-journald[1149]: Collecting audit messages is disabled. Nov 8 00:23:21.140433 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:23:21.140449 kernel: fuse: init (API version 7.39) Nov 8 00:23:21.140476 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
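The journald warning about IP firewalling fires because the shipped unit uses the IPAddressDeny=/IPAddressAllow= sandbox directives, which need BPF/cgroup support; that squares with the -BPF_FRAMEWORK entry in the systemd feature string above. A quick check for those directives in the unit file, assuming the stock unit ships them:

    unit = "/usr/lib/systemd/system/systemd-journald.service"
    print([l.strip() for l in open(unit) if l.strip().startswith("IPAddress")])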
Nov 8 00:23:21.140497 systemd-journald[1149]: Journal started Nov 8 00:23:21.140646 systemd-journald[1149]: Runtime Journal (/run/log/journal/de8b385436a1451a84785741553dd6a4) is 6.0M, max 48.4M, 42.3M free. Nov 8 00:23:21.149379 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:23:21.151436 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:23:21.154547 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:23:21.156819 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:23:21.161590 kernel: loop: module loaded Nov 8 00:23:21.158734 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:23:21.163004 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:23:21.165151 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:23:21.167734 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:23:21.170874 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:23:21.171135 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:23:21.173812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:23:21.175239 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:23:21.177584 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:23:21.177838 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:23:21.180571 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:23:21.180832 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:23:21.183064 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:23:21.183416 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:23:21.186559 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:23:21.188933 kernel: ACPI: bus type drm_connector registered Nov 8 00:23:21.190168 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:23:21.193665 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:23:21.194051 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:23:21.196663 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:23:21.213496 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:23:21.222051 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:23:21.225608 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:23:21.227531 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:23:21.230446 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:23:21.236055 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:23:21.238162 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:23:21.240954 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
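The runtime journal line reads as journald's default sizing policy: cap usage at roughly 10% of the backing filesystem (here the /run tmpfs) and keep 15% free, so a 48.4M cap implies a tmpfs of about 484M. A sketch of that arithmetic, assuming stock journald.conf defaults:

    def journal_caps(fs_bytes):
        # journald defaults: RuntimeMaxUse ~= 10% of fs, RuntimeKeepFree ~= 15%
        return {"max_use_MiB": fs_bytes * 0.10 / 2**20,
                "keep_free_MiB": fs_bytes * 0.15 / 2**20}

    print(journal_caps(484 * 2**20))  # max_use ~48.4, matching the log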
Nov 8 00:23:21.243034 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:23:21.246613 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:23:21.251166 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:23:21.257311 systemd-journald[1149]: Time spent on flushing to /var/log/journal/de8b385436a1451a84785741553dd6a4 is 23.459ms for 938 entries. Nov 8 00:23:21.257311 systemd-journald[1149]: System Journal (/var/log/journal/de8b385436a1451a84785741553dd6a4) is 8.0M, max 195.6M, 187.6M free. Nov 8 00:23:21.315444 systemd-journald[1149]: Received client request to flush runtime journal. Nov 8 00:23:21.257850 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:23:21.261622 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:23:21.271700 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:23:21.274106 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:23:21.310239 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:23:21.318841 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:23:21.322728 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:23:21.332018 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Nov 8 00:23:21.332039 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Nov 8 00:23:21.339340 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:23:21.348171 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:23:21.350502 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:23:21.356188 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:23:21.375503 udevadm[1228]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 8 00:23:21.383919 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:23:21.394221 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:23:21.420094 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Nov 8 00:23:21.420115 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Nov 8 00:23:21.427523 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:23:21.951140 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:23:21.964397 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:23:21.992456 systemd-udevd[1237]: Using default interface naming scheme 'v255'. Nov 8 00:23:22.010937 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:23:22.026110 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:23:22.040125 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:23:22.051623 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. 
Nov 8 00:23:22.087172 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1242) Nov 8 00:23:22.123675 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:23:22.156559 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 8 00:23:22.346929 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 8 00:23:22.352933 kernel: ACPI: button: Power Button [PWRF] Nov 8 00:23:22.367611 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 8 00:23:22.367950 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 8 00:23:22.368137 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 8 00:23:22.402925 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 8 00:23:22.408327 systemd-networkd[1245]: lo: Link UP Nov 8 00:23:22.408338 systemd-networkd[1245]: lo: Gained carrier Nov 8 00:23:22.410638 systemd-networkd[1245]: Enumeration completed Nov 8 00:23:22.410839 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:23:22.411234 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:22.411248 systemd-networkd[1245]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:23:22.412408 systemd-networkd[1245]: eth0: Link UP Nov 8 00:23:22.412419 systemd-networkd[1245]: eth0: Gained carrier Nov 8 00:23:22.412431 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:22.419077 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:23:22.421931 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:23:22.434961 systemd-networkd[1245]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 8 00:23:22.449532 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:22.504192 kernel: kvm_amd: TSC scaling supported Nov 8 00:23:22.504247 kernel: kvm_amd: Nested Virtualization enabled Nov 8 00:23:22.504267 kernel: kvm_amd: Nested Paging enabled Nov 8 00:23:22.506920 kernel: kvm_amd: LBR virtualization supported Nov 8 00:23:22.506954 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 8 00:23:22.506969 kernel: kvm_amd: Virtual GIF supported Nov 8 00:23:22.528933 kernel: EDAC MC: Ver: 3.0.0 Nov 8 00:23:22.568758 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:23:22.608187 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:22.619821 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:23:22.635892 lvm[1283]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:23:22.676625 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:23:22.679012 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:23:22.694126 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:23:22.700956 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:23:22.736968 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
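The DHCPv4 lease just acquired (10.0.0.93/16 via 10.0.0.1) is easy to sanity-check with the standard library; this just confirms what address range the /16 prefix puts eth0 in and that the gateway is on-link:

    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.93/16")
    print(iface.network)                                      # 10.0.0.0/16
    print(iface.network.broadcast_address)                    # 10.0.255.255
    print(ipaddress.ip_address("10.0.0.1") in iface.network)  # True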
Nov 8 00:23:22.739261 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:23:22.741382 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:23:22.741433 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:23:22.743135 systemd[1]: Reached target machines.target - Containers. Nov 8 00:23:22.746572 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:23:22.769279 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:23:22.772771 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:23:22.824526 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:23:22.825587 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:23:22.828745 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:23:22.834753 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:23:22.838848 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:23:22.844720 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:23:22.847518 kernel: loop0: detected capacity change from 0 to 140768 Nov 8 00:23:23.015926 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:23:23.125922 kernel: loop1: detected capacity change from 0 to 142488 Nov 8 00:23:23.164975 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:23:23.166453 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:23:23.174127 kernel: loop2: detected capacity change from 0 to 224512 Nov 8 00:23:23.214944 kernel: loop3: detected capacity change from 0 to 140768 Nov 8 00:23:23.292942 kernel: loop4: detected capacity change from 0 to 142488 Nov 8 00:23:23.306934 kernel: loop5: detected capacity change from 0 to 224512 Nov 8 00:23:23.311780 (sd-merge)[1307]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 8 00:23:23.312410 (sd-merge)[1307]: Merged extensions into '/usr'. Nov 8 00:23:23.318180 systemd[1]: Reloading requested from client PID 1294 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:23:23.318194 systemd[1]: Reloading... Nov 8 00:23:23.381950 zram_generator::config[1338]: No configuration found. Nov 8 00:23:23.527176 ldconfig[1290]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:23:23.600790 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:23:23.700251 systemd[1]: Reloading finished in 381 ms. Nov 8 00:23:23.723884 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:23:23.726790 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:23:23.756242 systemd[1]: Starting ensure-sysext.service... 
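The sd-merge lines show systemd-sysext stacking the three extension images over /usr, which it implements as a read-only overlayfs whose upper layers are the extensions and whose bottom layer is the original /usr. A rough equivalent of the resulting mount, with extraction paths invented for illustration (sysext performs this internally; run as root):

    import subprocess

    lowers = ":".join([
        "/run/extensions/kubernetes/usr",          # hypothetical unpack paths
        "/run/extensions/docker-flatcar/usr",
        "/run/extensions/containerd-flatcar/usr",
        "/usr",                                    # original content, bottom layer
    ])
    # No upperdir => the overlay is read-only, as sysext requires.
    subprocess.run(["mount", "-t", "overlay", "overlay",
                    "-o", f"lowerdir={lowers}", "/usr"], check=True)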
Nov 8 00:23:23.759495 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:23:23.764773 systemd[1]: Reloading requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:23:23.764791 systemd[1]: Reloading... Nov 8 00:23:23.807800 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:23:23.808350 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:23:23.812211 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:23:23.812821 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. Nov 8 00:23:23.813100 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. Nov 8 00:23:23.818549 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:23:23.819505 systemd-tmpfiles[1381]: Skipping /boot Nov 8 00:23:23.836964 zram_generator::config[1412]: No configuration found. Nov 8 00:23:23.843170 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:23:23.843229 systemd-tmpfiles[1381]: Skipping /boot Nov 8 00:23:23.977878 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:23:24.056747 systemd[1]: Reloading finished in 291 ms. Nov 8 00:23:24.061220 systemd-networkd[1245]: eth0: Gained IPv6LL Nov 8 00:23:24.098388 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:23:24.130846 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:23:24.156245 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:23:24.160536 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:23:24.164483 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:23:24.172955 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:23:24.180178 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:23:24.192289 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:24.192572 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:23:24.195303 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:23:24.202185 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:23:24.209547 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:23:24.217072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:23:24.217276 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:24.218723 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:23:24.228256 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
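The "Duplicate line for path" warnings above are first-match-wins conflict resolution across tmpfiles.d fragments: a later line for a path that an earlier fragment already configured is dropped with a warning. A toy version of that rule, reusing paths from the log:

    def first_wins(entries):
        seen, kept = set(), []
        for src, path in entries:
            if path in seen:
                print(f'{src}: Duplicate line for path "{path}", ignoring.')
                continue
            seen.add(path)
            kept.append((src, path))
        return kept

    first_wins([("systemd.conf", "/var/lib/systemd"),
                ("systemd-flatcar.conf", "/var/log/journal"),
                ("later.conf", "/var/lib/systemd")])  # dropped as a duplicate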
Nov 8 00:23:24.228561 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:23:24.232732 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:23:24.233018 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:23:24.235968 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:23:24.236191 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:23:24.243359 augenrules[1483]: No rules Nov 8 00:23:24.246856 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:23:24.266359 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:23:24.272332 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:23:24.279149 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:24.279397 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:23:24.289220 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:23:24.294296 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:23:24.300191 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:23:24.309008 systemd-resolved[1466]: Positive Trust Anchors: Nov 8 00:23:24.309027 systemd-resolved[1466]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:23:24.309069 systemd-resolved[1466]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:23:24.309196 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:23:24.311158 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:23:24.313210 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:23:24.315233 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:23:24.315486 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:24.317151 systemd-resolved[1466]: Defaulting to hostname 'linux'. Nov 8 00:23:24.318209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:23:24.318463 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:23:24.320814 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:23:24.323232 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:23:24.323475 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
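The positive trust anchor resolved prints above is the IANA root-zone KSK-2017 DS record (key tag 20326, algorithm 8 = RSASHA256, digest type 2 = SHA-256). Its fields split apart cleanly:

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _rtype, tag, alg, dtype, digest = ds.split()
    assert (int(tag), int(alg), int(dtype)) == (20326, 8, 2)
    assert len(bytes.fromhex(digest)) == 32  # SHA-256 digest of the root KSK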
Nov 8 00:23:24.325982 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:23:24.326241 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:23:24.328792 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:23:24.329078 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:23:24.335848 systemd[1]: Finished ensure-sysext.service. Nov 8 00:23:24.338088 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:23:24.345713 systemd[1]: Reached target network.target - Network. Nov 8 00:23:24.347255 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:23:24.349074 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:23:24.351133 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:23:24.351234 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:23:24.365125 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 8 00:23:24.432552 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:23:24.434726 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:23:24.436540 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:23:25.263360 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:23:25.263389 systemd-resolved[1466]: Clock change detected. Flushing caches. Nov 8 00:23:25.265443 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:23:25.265476 systemd-timesyncd[1518]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 8 00:23:25.267479 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:23:25.267514 systemd-timesyncd[1518]: Initial clock synchronization to Sat 2025-11-08 00:23:25.263303 UTC. Nov 8 00:23:25.267515 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:23:25.268970 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:23:25.270897 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:23:25.272694 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:23:25.274677 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:23:25.277047 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:23:25.281015 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:23:25.283984 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:23:25.299283 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:23:25.301042 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:23:25.302579 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:23:25.304284 systemd[1]: System is tainted: cgroupsv1 Nov 8 00:23:25.304339 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:23:25.304364 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
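Note the timestamps jumping from 00:23:24.4 to 00:23:25.2 above: timesyncd stepped the clock at its first synchronization, and resolved reacted by flushing caches. Diffing the stamps on either side of the step only approximates the correction, since it also includes whatever real time elapsed between the two entries, but it gives the right order of magnitude:

    from datetime import datetime

    fmt = "%H:%M:%S.%f"
    before = datetime.strptime("00:23:24.436540", fmt)  # last pre-step entry
    after = datetime.strptime("00:23:25.263360", fmt)   # first post-step entry
    print(f"apparent step: ~{(after - before).total_seconds():.3f}s")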
Nov 8 00:23:25.305876 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:23:25.308925 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 8 00:23:25.312600 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:23:25.317168 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:23:25.323323 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:23:25.325549 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:23:25.327894 jq[1526]: false Nov 8 00:23:25.328421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:23:25.333565 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:23:25.340001 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:23:25.344545 dbus-daemon[1524]: [system] SELinux support is enabled Nov 8 00:23:25.347997 extend-filesystems[1528]: Found loop3 Nov 8 00:23:25.347997 extend-filesystems[1528]: Found loop4 Nov 8 00:23:25.347997 extend-filesystems[1528]: Found loop5 Nov 8 00:23:25.347997 extend-filesystems[1528]: Found sr0 Nov 8 00:23:25.347997 extend-filesystems[1528]: Found vda Nov 8 00:23:25.347997 extend-filesystems[1528]: Found vda1 Nov 8 00:23:25.347997 extend-filesystems[1528]: Found vda2 Nov 8 00:23:25.347997 extend-filesystems[1528]: Found vda3 Nov 8 00:23:25.347997 extend-filesystems[1528]: Found usr Nov 8 00:23:25.347997 extend-filesystems[1528]: Found vda4 Nov 8 00:23:25.347997 extend-filesystems[1528]: Found vda6 Nov 8 00:23:25.347997 extend-filesystems[1528]: Found vda7 Nov 8 00:23:25.347997 extend-filesystems[1528]: Found vda9 Nov 8 00:23:25.347997 extend-filesystems[1528]: Checking size of /dev/vda9 Nov 8 00:23:25.406627 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 8 00:23:25.406668 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1241) Nov 8 00:23:25.345853 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:23:25.406866 extend-filesystems[1528]: Resized partition /dev/vda9 Nov 8 00:23:25.411125 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 8 00:23:25.355629 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:23:25.434232 extend-filesystems[1552]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:23:25.377652 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:23:25.446059 extend-filesystems[1552]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 8 00:23:25.446059 extend-filesystems[1552]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 8 00:23:25.446059 extend-filesystems[1552]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 8 00:23:25.390808 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:23:25.463605 extend-filesystems[1528]: Resized filesystem in /dev/vda9 Nov 8 00:23:25.402558 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:23:25.411413 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:23:25.415605 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
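The filesystem grow logged here takes vda9 from 553472 to 1864699 blocks of 4k each; in bytes that is roughly a 2.1 GiB filesystem expanding online to fill a 7.1 GiB partition:

    for blocks in (553472, 1864699):
        print(f"{blocks} blocks x 4096 = {blocks * 4096 / 2**30:.2f} GiB")
    # 553472 blocks x 4096 = 2.11 GiB
    # 1864699 blocks x 4096 = 7.11 GiB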
Nov 8 00:23:25.419666 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:23:25.467291 update_engine[1561]: I20251108 00:23:25.452450 1561 main.cc:92] Flatcar Update Engine starting Nov 8 00:23:25.467291 update_engine[1561]: I20251108 00:23:25.459070 1561 update_check_scheduler.cc:74] Next update check in 7m8s Nov 8 00:23:25.426623 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:23:25.469844 jq[1562]: true Nov 8 00:23:25.427082 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:23:25.430989 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:23:25.431468 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:23:25.439513 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:23:25.443059 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:23:25.443676 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:23:25.450614 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:23:25.451047 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:23:25.482704 jq[1573]: true Nov 8 00:23:25.497493 (ntainerd)[1574]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:23:25.504796 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 8 00:23:25.505205 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 8 00:23:25.537597 tar[1571]: linux-amd64/LICENSE Nov 8 00:23:25.542447 tar[1571]: linux-amd64/helm Nov 8 00:23:25.544992 systemd-logind[1557]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:23:25.545981 systemd-logind[1557]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:23:25.546619 systemd-logind[1557]: New seat seat0. Nov 8 00:23:25.550888 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:23:25.555687 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:23:25.560649 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:23:25.560909 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:23:25.561075 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:23:25.564682 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:23:25.564923 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:23:25.568872 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:23:25.605474 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:23:25.625629 bash[1608]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:23:25.630112 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:23:25.636379 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
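In the containerd startup that follows, each snapshotter plugin is probed and skipped when its backing requirements fail: aufs needs a kernel module this build lacks, and the btrfs/zfs snapshotters refuse because /var/lib/containerd sits on ext4. A toy version of that selection logic, under the assumption that overlayfs is the fallback that ends up active here:

    def pick_snapshotter(varlib_fstype, have_aufs_module=False):
        if have_aufs_module:
            return "aufs"
        if varlib_fstype in ("btrfs", "zfs"):
            return varlib_fstype  # only valid on a matching filesystem
        return "overlayfs"        # the default that survives the probes

    print(pick_snapshotter("ext4"))  # -> overlayfs, as on this machine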
Nov 8 00:23:25.681567 locksmithd[1607]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:23:26.012132 sshd_keygen[1563]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:23:26.026458 containerd[1574]: time="2025-11-08T00:23:26.025896378Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:23:26.079451 containerd[1574]: time="2025-11-08T00:23:26.078626516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:26.081521 containerd[1574]: time="2025-11-08T00:23:26.081468095Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:23:26.081521 containerd[1574]: time="2025-11-08T00:23:26.081517497Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:23:26.081615 containerd[1574]: time="2025-11-08T00:23:26.081538978Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:23:26.081831 containerd[1574]: time="2025-11-08T00:23:26.081803494Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:23:26.081862 containerd[1574]: time="2025-11-08T00:23:26.081831787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:26.081987 containerd[1574]: time="2025-11-08T00:23:26.081940811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:23:26.081987 containerd[1574]: time="2025-11-08T00:23:26.081984253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:26.082503 containerd[1574]: time="2025-11-08T00:23:26.082458582Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:23:26.082549 containerd[1574]: time="2025-11-08T00:23:26.082508396Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:26.082549 containerd[1574]: time="2025-11-08T00:23:26.082539794Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:23:26.082587 containerd[1574]: time="2025-11-08T00:23:26.082551096Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:26.082758 containerd[1574]: time="2025-11-08T00:23:26.082730422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:26.083076 containerd[1574]: time="2025-11-08T00:23:26.083046304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Nov 8 00:23:26.083314 containerd[1574]: time="2025-11-08T00:23:26.083279592Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:23:26.083314 containerd[1574]: time="2025-11-08T00:23:26.083311011Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:23:26.084196 containerd[1574]: time="2025-11-08T00:23:26.083652942Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:23:26.084196 containerd[1574]: time="2025-11-08T00:23:26.083964877Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:23:26.091868 containerd[1574]: time="2025-11-08T00:23:26.090540018Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:23:26.091868 containerd[1574]: time="2025-11-08T00:23:26.090607464Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:23:26.091868 containerd[1574]: time="2025-11-08T00:23:26.090633633Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:23:26.091868 containerd[1574]: time="2025-11-08T00:23:26.090655344Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:23:26.091868 containerd[1574]: time="2025-11-08T00:23:26.090675382Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:23:26.091868 containerd[1574]: time="2025-11-08T00:23:26.090851352Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:23:26.091868 containerd[1574]: time="2025-11-08T00:23:26.091278913Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:23:26.091868 containerd[1574]: time="2025-11-08T00:23:26.091458911Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:23:26.091868 containerd[1574]: time="2025-11-08T00:23:26.091478327Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:23:26.091868 containerd[1574]: time="2025-11-08T00:23:26.091491662Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:23:26.091868 containerd[1574]: time="2025-11-08T00:23:26.091504146Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:23:26.091868 containerd[1574]: time="2025-11-08T00:23:26.091524143Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:23:26.091868 containerd[1574]: time="2025-11-08T00:23:26.091539161Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:23:26.091868 containerd[1574]: time="2025-11-08T00:23:26.091553268Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Nov 8 00:23:26.092264 containerd[1574]: time="2025-11-08T00:23:26.091570470Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:23:26.092264 containerd[1574]: time="2025-11-08T00:23:26.091582703Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:23:26.092264 containerd[1574]: time="2025-11-08T00:23:26.091599665Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:23:26.092264 containerd[1574]: time="2025-11-08T00:23:26.091611257Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:23:26.092264 containerd[1574]: time="2025-11-08T00:23:26.091642635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:23:26.092264 containerd[1574]: time="2025-11-08T00:23:26.091656742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:23:26.092264 containerd[1574]: time="2025-11-08T00:23:26.091668264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:23:26.092264 containerd[1574]: time="2025-11-08T00:23:26.091683432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:23:26.092264 containerd[1574]: time="2025-11-08T00:23:26.091697619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:23:26.092264 containerd[1574]: time="2025-11-08T00:23:26.091716614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:23:26.092264 containerd[1574]: time="2025-11-08T00:23:26.091743925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:23:26.092264 containerd[1574]: time="2025-11-08T00:23:26.091757942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:23:26.092264 containerd[1574]: time="2025-11-08T00:23:26.091771257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:23:26.092264 containerd[1574]: time="2025-11-08T00:23:26.091784732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:23:26.092603 containerd[1574]: time="2025-11-08T00:23:26.091797877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:23:26.092603 containerd[1574]: time="2025-11-08T00:23:26.091815670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:23:26.092603 containerd[1574]: time="2025-11-08T00:23:26.091829566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:23:26.092603 containerd[1574]: time="2025-11-08T00:23:26.091847620Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:23:26.092603 containerd[1574]: time="2025-11-08T00:23:26.091876133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Nov 8 00:23:26.092603 containerd[1574]: time="2025-11-08T00:23:26.091889208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:23:26.092603 containerd[1574]: time="2025-11-08T00:23:26.091899728Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:23:26.092603 containerd[1574]: time="2025-11-08T00:23:26.091987292Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:23:26.092603 containerd[1574]: time="2025-11-08T00:23:26.092009724Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:23:26.092603 containerd[1574]: time="2025-11-08T00:23:26.092032246Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:23:26.092603 containerd[1574]: time="2025-11-08T00:23:26.092058435Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:23:26.092603 containerd[1574]: time="2025-11-08T00:23:26.092069626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:23:26.092603 containerd[1574]: time="2025-11-08T00:23:26.092087800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:23:26.092603 containerd[1574]: time="2025-11-08T00:23:26.092112046Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:23:26.092871 containerd[1574]: time="2025-11-08T00:23:26.092123026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:23:26.092896 containerd[1574]: time="2025-11-08T00:23:26.092559735Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:23:26.092896 containerd[1574]: time="2025-11-08T00:23:26.092747287Z" level=info msg="Connect containerd service" Nov 8 00:23:26.092896 containerd[1574]: time="2025-11-08T00:23:26.092816457Z" level=info msg="using legacy CRI server" Nov 8 00:23:26.092896 containerd[1574]: time="2025-11-08T00:23:26.092829231Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:23:26.093149 containerd[1574]: time="2025-11-08T00:23:26.093014488Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:23:26.094345 containerd[1574]: time="2025-11-08T00:23:26.094308445Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:23:26.095461 
containerd[1574]: time="2025-11-08T00:23:26.094813432Z" level=info msg="Start subscribing containerd event" Nov 8 00:23:26.096439 containerd[1574]: time="2025-11-08T00:23:26.095753244Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:23:26.096439 containerd[1574]: time="2025-11-08T00:23:26.095836240Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:23:26.100980 containerd[1574]: time="2025-11-08T00:23:26.100928329Z" level=info msg="Start recovering state" Nov 8 00:23:26.101233 containerd[1574]: time="2025-11-08T00:23:26.101217482Z" level=info msg="Start event monitor" Nov 8 00:23:26.101315 containerd[1574]: time="2025-11-08T00:23:26.101302972Z" level=info msg="Start snapshots syncer" Nov 8 00:23:26.101463 containerd[1574]: time="2025-11-08T00:23:26.101447513Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:23:26.101525 containerd[1574]: time="2025-11-08T00:23:26.101512074Z" level=info msg="Start streaming server" Nov 8 00:23:26.101946 containerd[1574]: time="2025-11-08T00:23:26.101931160Z" level=info msg="containerd successfully booted in 0.077321s" Nov 8 00:23:26.244195 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:23:26.247132 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:23:26.263388 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:23:26.274730 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:23:26.275251 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:23:26.299400 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:23:26.320925 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:23:26.336731 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:23:26.340577 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:23:26.343108 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:23:26.721182 tar[1571]: linux-amd64/README.md Nov 8 00:23:26.742502 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:23:27.039643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:23:27.042138 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:23:27.044631 systemd[1]: Startup finished in 7.648s (kernel) + 5.950s (userspace) = 13.599s. Nov 8 00:23:27.045398 (kubelet)[1657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:23:27.852823 kubelet[1657]: E1108 00:23:27.852736 1657 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:23:27.857273 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:23:27.857781 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:23:27.865936 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:23:27.876880 systemd[1]: Started sshd@0-10.0.0.93:22-10.0.0.1:49228.service - OpenSSH per-connection server daemon (10.0.0.1:49228). 
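With containerd.service up and serving on /run/containerd/containerd.sock (both the socket path and version v1.7.21 are logged above), the daemon can be queried directly. A minimal sketch using the containerd Go client, assuming github.com/containerd/containerd is available on the module path; this is illustrative and not part of the boot:

    // ctrversion.go - connects to the socket containerd is now serving on
    // and prints the daemon version and revision.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Most containerd APIs are namespaced; "k8s.io" is what the CRI
        // plugin uses, "default" is the ctr default.
        ctx := namespaces.WithNamespace(context.Background(), "default")

        v, err := client.Version(ctx)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("containerd %s (revision %s)\n", v.Version, v.Revision)
    }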
Nov 8 00:23:27.914451 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 49228 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:27.916616 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:27.925777 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:23:27.934624 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:23:27.936628 systemd-logind[1557]: New session 1 of user core. Nov 8 00:23:27.954774 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:23:27.957651 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:23:27.996018 (systemd)[1675]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:23:28.142540 systemd[1675]: Queued start job for default target default.target. Nov 8 00:23:28.143083 systemd[1675]: Created slice app.slice - User Application Slice. Nov 8 00:23:28.143108 systemd[1675]: Reached target paths.target - Paths. Nov 8 00:23:28.143125 systemd[1675]: Reached target timers.target - Timers. Nov 8 00:23:28.161610 systemd[1675]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:23:28.168981 systemd[1675]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:23:28.169063 systemd[1675]: Reached target sockets.target - Sockets. Nov 8 00:23:28.169076 systemd[1675]: Reached target basic.target - Basic System. Nov 8 00:23:28.169128 systemd[1675]: Reached target default.target - Main User Target. Nov 8 00:23:28.169174 systemd[1675]: Startup finished in 162ms. Nov 8 00:23:28.169934 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:23:28.171671 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:23:28.232800 systemd[1]: Started sshd@1-10.0.0.93:22-10.0.0.1:49244.service - OpenSSH per-connection server daemon (10.0.0.1:49244). Nov 8 00:23:28.262984 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 49244 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:28.264794 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:28.269319 systemd-logind[1557]: New session 2 of user core. Nov 8 00:23:28.279113 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:23:28.338796 sshd[1688]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:28.352814 systemd[1]: Started sshd@2-10.0.0.93:22-10.0.0.1:49254.service - OpenSSH per-connection server daemon (10.0.0.1:49254). Nov 8 00:23:28.353979 systemd[1]: sshd@1-10.0.0.93:22-10.0.0.1:49244.service: Deactivated successfully. Nov 8 00:23:28.357207 systemd-logind[1557]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:23:28.358675 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:23:28.360248 systemd-logind[1557]: Removed session 2. Nov 8 00:23:28.384502 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 49254 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:28.386639 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:28.391134 systemd-logind[1557]: New session 3 of user core. Nov 8 00:23:28.400724 systemd[1]: Started session-3.scope - Session 3 of User core. 
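The SHA256:... strings in the "Accepted publickey" records above are base64-encoded SHA-256 fingerprints of the presented key. A sketch of deriving the same form from the authorized_keys file that update-ssh-keys maintains (see the bash[1608] record earlier), assuming golang.org/x/crypto/ssh; the file path is taken from the log, the program is illustrative:

    // fingerprint.go - prints type, SHA256 fingerprint, and comment for
    // each key in an authorized_keys file, in the format sshd logs.
    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        data, err := os.ReadFile("/home/core/.ssh/authorized_keys")
        if err != nil {
            log.Fatal(err)
        }
        for len(data) > 0 {
            pub, comment, _, rest, err := ssh.ParseAuthorizedKey(data)
            if err != nil {
                log.Fatal(err)
            }
            // FingerprintSHA256 yields the "SHA256:..." form seen in the
            // "Accepted publickey" lines above.
            fmt.Printf("%s %s %s\n", pub.Type(), ssh.FingerprintSHA256(pub), comment)
            data = rest
        }
    }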
Nov 8 00:23:28.452677 sshd[1693]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:28.461710 systemd[1]: Started sshd@3-10.0.0.93:22-10.0.0.1:49264.service - OpenSSH per-connection server daemon (10.0.0.1:49264). Nov 8 00:23:28.462289 systemd[1]: sshd@2-10.0.0.93:22-10.0.0.1:49254.service: Deactivated successfully. Nov 8 00:23:28.465453 systemd-logind[1557]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:23:28.466607 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:23:28.467719 systemd-logind[1557]: Removed session 3. Nov 8 00:23:28.489833 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 49264 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:28.491377 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:28.495636 systemd-logind[1557]: New session 4 of user core. Nov 8 00:23:28.510678 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:23:28.566090 sshd[1701]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:28.574687 systemd[1]: Started sshd@4-10.0.0.93:22-10.0.0.1:49268.service - OpenSSH per-connection server daemon (10.0.0.1:49268). Nov 8 00:23:28.575246 systemd[1]: sshd@3-10.0.0.93:22-10.0.0.1:49264.service: Deactivated successfully. Nov 8 00:23:28.578285 systemd-logind[1557]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:23:28.579440 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:23:28.580592 systemd-logind[1557]: Removed session 4. Nov 8 00:23:28.601897 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 49268 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:28.603669 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:28.608040 systemd-logind[1557]: New session 5 of user core. Nov 8 00:23:28.617801 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:23:28.678281 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:23:28.678663 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:23:28.695823 sudo[1716]: pam_unix(sudo:session): session closed for user root Nov 8 00:23:28.698072 sshd[1709]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:28.709745 systemd[1]: Started sshd@5-10.0.0.93:22-10.0.0.1:49278.service - OpenSSH per-connection server daemon (10.0.0.1:49278). Nov 8 00:23:28.710409 systemd[1]: sshd@4-10.0.0.93:22-10.0.0.1:49268.service: Deactivated successfully. Nov 8 00:23:28.713784 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:23:28.714706 systemd-logind[1557]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:23:28.716758 systemd-logind[1557]: Removed session 5. Nov 8 00:23:28.736398 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 49278 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:28.737850 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:28.742053 systemd-logind[1557]: New session 6 of user core. Nov 8 00:23:28.752704 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 8 00:23:28.810481 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:23:28.810844 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:23:28.815025 sudo[1726]: pam_unix(sudo:session): session closed for user root Nov 8 00:23:28.821947 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:23:28.822305 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:23:28.844697 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:23:28.846797 auditctl[1729]: No rules Nov 8 00:23:28.848315 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:23:28.848767 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:23:28.851204 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:23:28.887457 augenrules[1748]: No rules Nov 8 00:23:28.889384 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:23:28.890806 sudo[1725]: pam_unix(sudo:session): session closed for user root Nov 8 00:23:28.892913 sshd[1718]: pam_unix(sshd:session): session closed for user core Nov 8 00:23:28.901665 systemd[1]: Started sshd@6-10.0.0.93:22-10.0.0.1:49280.service - OpenSSH per-connection server daemon (10.0.0.1:49280). Nov 8 00:23:28.902280 systemd[1]: sshd@5-10.0.0.93:22-10.0.0.1:49278.service: Deactivated successfully. Nov 8 00:23:28.904098 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:23:28.904834 systemd-logind[1557]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:23:28.906207 systemd-logind[1557]: Removed session 6. Nov 8 00:23:28.929501 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 49280 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:23:28.930999 sshd[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:23:28.935133 systemd-logind[1557]: New session 7 of user core. Nov 8 00:23:28.949121 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:23:29.008368 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:23:29.009104 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:23:29.850675 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:23:29.850955 (dockerd)[1779]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:23:30.465128 dockerd[1779]: time="2025-11-08T00:23:30.465049174Z" level=info msg="Starting up" Nov 8 00:23:31.627572 dockerd[1779]: time="2025-11-08T00:23:31.627457544Z" level=info msg="Loading containers: start." Nov 8 00:23:31.795464 kernel: Initializing XFRM netlink socket Nov 8 00:23:31.897787 systemd-networkd[1245]: docker0: Link UP Nov 8 00:23:31.919665 dockerd[1779]: time="2025-11-08T00:23:31.919588574Z" level=info msg="Loading containers: done." 
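dockerd finishes initialization just below and announces "API listen on /run/docker.sock". A sketch of pinging that daemon with the official Go client, assuming github.com/docker/docker/client; nothing here is taken from the boot beyond the default socket:

    // dockerping.go - pings the local Docker daemon and reports the
    // negotiated API version.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client"
    )

    func main() {
        // FromEnv falls back to unix:///var/run/docker.sock when
        // DOCKER_HOST is unset; version negotiation avoids API mismatches.
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        ping, err := cli.Ping(context.Background())
        if err != nil {
            log.Fatal(err) // daemon not up yet, or socket not readable
        }
        fmt.Println("docker API version:", ping.APIVersion)
    }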
Nov 8 00:23:31.938973 dockerd[1779]: time="2025-11-08T00:23:31.938907053Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:23:31.939172 dockerd[1779]: time="2025-11-08T00:23:31.939035454Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:23:31.939172 dockerd[1779]: time="2025-11-08T00:23:31.939167652Z" level=info msg="Daemon has completed initialization" Nov 8 00:23:31.979504 dockerd[1779]: time="2025-11-08T00:23:31.979387649Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:23:31.979779 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:23:32.838358 containerd[1574]: time="2025-11-08T00:23:32.838283996Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:23:33.691093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1058501158.mount: Deactivated successfully. Nov 8 00:23:35.135619 containerd[1574]: time="2025-11-08T00:23:35.135546183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:35.136235 containerd[1574]: time="2025-11-08T00:23:35.136165013Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 8 00:23:35.137346 containerd[1574]: time="2025-11-08T00:23:35.137313778Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:35.140505 containerd[1574]: time="2025-11-08T00:23:35.140471790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:35.141845 containerd[1574]: time="2025-11-08T00:23:35.141766809Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.303394737s" Nov 8 00:23:35.141845 containerd[1574]: time="2025-11-08T00:23:35.141845847Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 8 00:23:35.143194 containerd[1574]: time="2025-11-08T00:23:35.143163969Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:23:36.948596 containerd[1574]: time="2025-11-08T00:23:36.948522761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:36.949366 containerd[1574]: time="2025-11-08T00:23:36.949298075Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 8 00:23:36.950890 containerd[1574]: time="2025-11-08T00:23:36.950839967Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:36.954354 containerd[1574]: time="2025-11-08T00:23:36.954295176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:36.955600 containerd[1574]: time="2025-11-08T00:23:36.955557464Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.812355183s" Nov 8 00:23:36.955664 containerd[1574]: time="2025-11-08T00:23:36.955604652Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 8 00:23:36.956302 containerd[1574]: time="2025-11-08T00:23:36.956269038Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 00:23:38.107753 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:23:38.117800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:23:38.403262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:23:38.409869 (kubelet)[2001]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:23:38.713769 kubelet[2001]: E1108 00:23:38.713582 2001 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:23:38.723117 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:23:38.723414 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 8 00:23:39.792978 containerd[1574]: time="2025-11-08T00:23:39.792917226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:39.794620 containerd[1574]: time="2025-11-08T00:23:39.794578722Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 8 00:23:39.795962 containerd[1574]: time="2025-11-08T00:23:39.795777891Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:39.799772 containerd[1574]: time="2025-11-08T00:23:39.798518811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:39.799851 containerd[1574]: time="2025-11-08T00:23:39.799786619Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 2.843474339s" Nov 8 00:23:39.799851 containerd[1574]: time="2025-11-08T00:23:39.799823838Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 8 00:23:39.800335 containerd[1574]: time="2025-11-08T00:23:39.800310321Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:23:41.203525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681974556.mount: Deactivated successfully. 
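The PullImage / ImageCreate / "Pulled image ... in <duration>" records above come from the CRI plugin inside containerd. The same pull can be driven directly with the containerd Go client; a minimal sketch, assuming github.com/containerd/containerd (the image reference is one pulled later in this log):

    // pullimage.go - pulls and unpacks an image through containerd,
    // bypassing the CRI path the kubelet normally takes.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The CRI plugin stores Kubernetes images under the k8s.io namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
    }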
Nov 8 00:23:42.686368 containerd[1574]: time="2025-11-08T00:23:42.686300853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:42.687248 containerd[1574]: time="2025-11-08T00:23:42.687143383Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 8 00:23:42.688225 containerd[1574]: time="2025-11-08T00:23:42.688191028Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:42.690596 containerd[1574]: time="2025-11-08T00:23:42.690529934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:42.691128 containerd[1574]: time="2025-11-08T00:23:42.691075357Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.890733487s" Nov 8 00:23:42.691128 containerd[1574]: time="2025-11-08T00:23:42.691115703Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 8 00:23:42.691763 containerd[1574]: time="2025-11-08T00:23:42.691734563Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 00:23:43.678857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535305003.mount: Deactivated successfully. 
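Each pull record pairs a byte count with a wall-clock duration, which fixes the effective transfer rate; for the kube-proxy pull just above that is roughly 29.5 MiB in 2.89 s, about 10 MiB/s. The arithmetic, spelled out with the numbers from the log:

    // pullrate.go - illustrative arithmetic only, using the size and
    // duration reported for the kube-proxy:v1.32.9 pull above.
    package main

    import "fmt"

    func main() {
        const bytes = 30923225      // size reported in the pull record
        const seconds = 2.890733487 // duration reported in the same record
        mib := float64(bytes) / (1024 * 1024)
        fmt.Printf("%.1f MiB in %.2fs = %.1f MiB/s\n", mib, seconds, mib/seconds)
    }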
Nov 8 00:23:45.747610 containerd[1574]: time="2025-11-08T00:23:45.747540089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:45.765244 containerd[1574]: time="2025-11-08T00:23:45.764008163Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 8 00:23:45.799500 containerd[1574]: time="2025-11-08T00:23:45.799416286Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:45.806756 containerd[1574]: time="2025-11-08T00:23:45.806709403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:45.807930 containerd[1574]: time="2025-11-08T00:23:45.807871051Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.116097414s" Nov 8 00:23:45.807973 containerd[1574]: time="2025-11-08T00:23:45.807932076Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 8 00:23:45.808542 containerd[1574]: time="2025-11-08T00:23:45.808513466Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:23:47.174670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount940914714.mount: Deactivated successfully. 
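The var-lib-containerd-tmpmounts-containerd\x2dmount<N>.mount unit names in these records are systemd's path escaping at work: "/" maps to "-" and bytes outside [a-zA-Z0-9:_.] (here the literal "-" in containerd-mount...) become \xNN. A simplified, illustrative reimplementation; systemd-escape(1) handles more edge cases such as leading dots, and the mount path below is reconstructed from the unit name rather than taken verbatim from the log:

    // unitescape.go - simplified systemd path escaping for mount unit names.
    package main

    import (
        "fmt"
        "strings"
    )

    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        var out strings.Builder
        for i := 0; i < len(p); i++ {
            b := p[i]
            switch {
            case b == '/':
                out.WriteByte('-')
            case b >= 'a' && b <= 'z', b >= 'A' && b <= 'Z',
                b >= '0' && b <= '9', b == ':', b == '_', b == '.':
                out.WriteByte(b)
            default:
                fmt.Fprintf(&out, `\x%02x`, b)
            }
        }
        return out.String()
    }

    func main() {
        p := "/var/lib/containerd/tmpmounts/containerd-mount940914714"
        // Prints: var-lib-containerd-tmpmounts-containerd\x2dmount940914714.mount
        fmt.Println(escapePath(p) + ".mount")
    }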
Nov 8 00:23:47.180545 containerd[1574]: time="2025-11-08T00:23:47.180492697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:47.181224 containerd[1574]: time="2025-11-08T00:23:47.181151052Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 8 00:23:47.182284 containerd[1574]: time="2025-11-08T00:23:47.182244613Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:47.184503 containerd[1574]: time="2025-11-08T00:23:47.184460017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:47.185360 containerd[1574]: time="2025-11-08T00:23:47.185316744Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.376771558s" Nov 8 00:23:47.185360 containerd[1574]: time="2025-11-08T00:23:47.185351689Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:23:47.186126 containerd[1574]: time="2025-11-08T00:23:47.185860293Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 8 00:23:47.792611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2805006475.mount: Deactivated successfully. Nov 8 00:23:48.973844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:23:49.025734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:23:49.214368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:23:49.219784 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:23:49.542036 kubelet[2142]: E1108 00:23:49.541965 2142 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:23:49.547213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:23:49.547553 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 8 00:23:50.065048 containerd[1574]: time="2025-11-08T00:23:50.064991091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:50.065822 containerd[1574]: time="2025-11-08T00:23:50.065754403Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 8 00:23:50.067144 containerd[1574]: time="2025-11-08T00:23:50.067100457Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:50.072470 containerd[1574]: time="2025-11-08T00:23:50.072412459Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.886509065s" Nov 8 00:23:50.072547 containerd[1574]: time="2025-11-08T00:23:50.072472381Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 8 00:23:50.073233 containerd[1574]: time="2025-11-08T00:23:50.073195598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:23:52.376245 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:23:52.389642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:23:52.418073 systemd[1]: Reloading requested from client PID 2185 ('systemctl') (unit session-7.scope)... Nov 8 00:23:52.418090 systemd[1]: Reloading... Nov 8 00:23:52.499436 zram_generator::config[2224]: No configuration found. Nov 8 00:23:52.800003 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:23:52.880663 systemd[1]: Reloading finished in 462 ms. Nov 8 00:23:52.921396 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:23:52.921519 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:23:52.921903 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:23:52.923983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:23:53.103599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:23:53.109237 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:23:53.159970 kubelet[2284]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:23:53.159970 kubelet[2284]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 8 00:23:53.159970 kubelet[2284]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:23:53.160594 kubelet[2284]: I1108 00:23:53.160028 2284 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:23:53.370296 kubelet[2284]: I1108 00:23:53.370161 2284 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:23:53.370296 kubelet[2284]: I1108 00:23:53.370207 2284 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:23:53.371945 kubelet[2284]: I1108 00:23:53.370919 2284 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:23:53.474784 kubelet[2284]: E1108 00:23:53.474718 2284 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:23:53.476375 kubelet[2284]: I1108 00:23:53.476315 2284 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:23:53.482157 kubelet[2284]: E1108 00:23:53.482122 2284 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:23:53.482220 kubelet[2284]: I1108 00:23:53.482162 2284 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:23:53.491138 kubelet[2284]: I1108 00:23:53.491084 2284 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:23:53.491837 kubelet[2284]: I1108 00:23:53.491777 2284 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:23:53.492027 kubelet[2284]: I1108 00:23:53.491825 2284 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 8 00:23:53.492171 kubelet[2284]: I1108 00:23:53.492041 2284 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:23:53.492171 kubelet[2284]: I1108 00:23:53.492052 2284 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:23:53.492262 kubelet[2284]: I1108 00:23:53.492245 2284 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:23:53.498789 kubelet[2284]: I1108 00:23:53.498723 2284 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:23:53.500314 kubelet[2284]: I1108 00:23:53.500275 2284 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:23:53.500364 kubelet[2284]: I1108 00:23:53.500325 2284 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:23:53.500364 kubelet[2284]: I1108 00:23:53.500343 2284 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:23:53.503840 kubelet[2284]: I1108 00:23:53.503811 2284 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:23:53.504201 kubelet[2284]: I1108 00:23:53.504173 2284 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:23:53.505189 kubelet[2284]: W1108 00:23:53.504850 2284 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 8 00:23:53.508447 kubelet[2284]: W1108 00:23:53.506281 2284 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Nov 8 00:23:53.508447 kubelet[2284]: W1108 00:23:53.506336 2284 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Nov 8 00:23:53.508447 kubelet[2284]: E1108 00:23:53.506404 2284 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:23:53.508447 kubelet[2284]: E1108 00:23:53.506360 2284 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:23:53.508447 kubelet[2284]: I1108 00:23:53.507669 2284 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:23:53.508447 kubelet[2284]: I1108 00:23:53.507717 2284 server.go:1287] "Started kubelet" Nov 8 00:23:53.509853 kubelet[2284]: I1108 00:23:53.509759 2284 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:23:53.509998 kubelet[2284]: I1108 00:23:53.509977 2284 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:23:53.510138 kubelet[2284]: I1108 00:23:53.510119 2284 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:23:53.510250 kubelet[2284]: I1108 00:23:53.510218 2284 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:23:53.511206 kubelet[2284]: I1108 00:23:53.511032 2284 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:23:53.512375 kubelet[2284]: I1108 00:23:53.512309 2284 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:23:53.514065 kubelet[2284]: E1108 00:23:53.513884 2284 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:23:53.514065 kubelet[2284]: I1108 00:23:53.513949 2284 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:23:53.514156 kubelet[2284]: I1108 00:23:53.514121 2284 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:23:53.514186 kubelet[2284]: I1108 00:23:53.514173 2284 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:23:53.515852 kubelet[2284]: E1108 00:23:53.514456 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="200ms" Nov 8 00:23:53.515852 kubelet[2284]: W1108 00:23:53.514532 2284 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Nov 8 00:23:53.515852 kubelet[2284]: E1108 00:23:53.514581 2284 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:23:53.515852 kubelet[2284]: E1108 00:23:53.514697 2284 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:23:53.515852 kubelet[2284]: I1108 00:23:53.514808 2284 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:23:53.516140 kubelet[2284]: I1108 00:23:53.516099 2284 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:23:53.516140 kubelet[2284]: I1108 00:23:53.516116 2284 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:23:53.516819 kubelet[2284]: E1108 00:23:53.515167 2284 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.93:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.93:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875e0490b076f4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-08 00:23:53.507688267 +0000 UTC m=+0.393816106,LastTimestamp:2025-11-08 00:23:53.507688267 +0000 UTC m=+0.393816106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 8 00:23:53.549840 kubelet[2284]: I1108 00:23:53.549784 2284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:23:53.552444 kubelet[2284]: I1108 00:23:53.552384 2284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:23:53.552548 kubelet[2284]: I1108 00:23:53.552469 2284 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:23:53.552548 kubelet[2284]: I1108 00:23:53.552508 2284 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
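Every client-go reflector, the lease controller, and the event writer above fail with connect: connection refused against https://10.0.0.93:6443 because the kubelet starts before the static kube-apiserver pod it is about to create; the errors should clear once that pod is running. The reachability test they all reduce to, as a stdlib sketch with the address taken from the log:

    // apiprobe.go - checks whether the kube-apiserver secure port is
    // accepting TCP connections yet.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "10.0.0.93:6443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable yet:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }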
Nov 8 00:23:53.552548 kubelet[2284]: I1108 00:23:53.552519 2284 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:23:53.552704 kubelet[2284]: E1108 00:23:53.552589 2284 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:23:53.555372 kubelet[2284]: W1108 00:23:53.555232 2284 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Nov 8 00:23:53.555372 kubelet[2284]: E1108 00:23:53.555306 2284 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:23:53.562957 kubelet[2284]: I1108 00:23:53.562932 2284 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:23:53.562957 kubelet[2284]: I1108 00:23:53.562953 2284 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:23:53.563037 kubelet[2284]: I1108 00:23:53.562978 2284 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:23:53.614307 kubelet[2284]: E1108 00:23:53.614261 2284 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:23:53.653624 kubelet[2284]: E1108 00:23:53.653466 2284 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 8 00:23:53.714808 kubelet[2284]: E1108 00:23:53.714736 2284 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:23:53.715338 kubelet[2284]: E1108 00:23:53.715290 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="400ms" Nov 8 00:23:53.815737 kubelet[2284]: E1108 00:23:53.815686 2284 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:23:53.854142 kubelet[2284]: E1108 00:23:53.854088 2284 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 8 00:23:53.916569 kubelet[2284]: E1108 00:23:53.916381 2284 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:23:54.017511 kubelet[2284]: E1108 00:23:54.017449 2284 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:23:54.029650 kubelet[2284]: I1108 00:23:54.029593 2284 policy_none.go:49] "None policy: Start" Nov 8 00:23:54.029650 kubelet[2284]: I1108 00:23:54.029662 2284 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:23:54.029841 kubelet[2284]: I1108 00:23:54.029690 2284 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:23:54.062233 kubelet[2284]: I1108 00:23:54.062195 2284 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:23:54.062508 kubelet[2284]: I1108 00:23:54.062488 2284 
eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:23:54.062557 kubelet[2284]: I1108 00:23:54.062513 2284 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:23:54.063692 kubelet[2284]: I1108 00:23:54.063668 2284 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:23:54.064549 kubelet[2284]: E1108 00:23:54.064527 2284 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:23:54.064609 kubelet[2284]: E1108 00:23:54.064597 2284 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 8 00:23:54.116494 kubelet[2284]: E1108 00:23:54.116417 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="800ms" Nov 8 00:23:54.164223 kubelet[2284]: I1108 00:23:54.164154 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:23:54.164773 kubelet[2284]: E1108 00:23:54.164717 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Nov 8 00:23:54.261531 kubelet[2284]: E1108 00:23:54.261397 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:23:54.265690 kubelet[2284]: E1108 00:23:54.265660 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:23:54.268080 kubelet[2284]: E1108 00:23:54.268031 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:23:54.319286 kubelet[2284]: I1108 00:23:54.319208 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fedbdd6127f02c0d922d225f8983f314-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fedbdd6127f02c0d922d225f8983f314\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:23:54.319286 kubelet[2284]: I1108 00:23:54.319261 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:23:54.319286 kubelet[2284]: I1108 00:23:54.319294 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:23:54.319565 kubelet[2284]: I1108 00:23:54.319321 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:23:54.319565 kubelet[2284]: I1108 00:23:54.319348 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fedbdd6127f02c0d922d225f8983f314-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fedbdd6127f02c0d922d225f8983f314\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:23:54.319565 kubelet[2284]: I1108 00:23:54.319392 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fedbdd6127f02c0d922d225f8983f314-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fedbdd6127f02c0d922d225f8983f314\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:23:54.319565 kubelet[2284]: I1108 00:23:54.319414 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:23:54.319565 kubelet[2284]: I1108 00:23:54.319450 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:23:54.319703 kubelet[2284]: I1108 00:23:54.319478 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:23:54.367154 kubelet[2284]: I1108 00:23:54.367104 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:23:54.367543 kubelet[2284]: E1108 00:23:54.367509 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Nov 8 00:23:54.510722 kubelet[2284]: W1108 00:23:54.510652 2284 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Nov 8 00:23:54.510722 kubelet[2284]: E1108 00:23:54.510719 2284 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:23:54.562384 kubelet[2284]: E1108 00:23:54.562229 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 
00:23:54.563178 containerd[1574]: time="2025-11-08T00:23:54.563117985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fedbdd6127f02c0d922d225f8983f314,Namespace:kube-system,Attempt:0,}" Nov 8 00:23:54.566375 kubelet[2284]: E1108 00:23:54.566326 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:54.566865 containerd[1574]: time="2025-11-08T00:23:54.566832551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 8 00:23:54.569448 kubelet[2284]: E1108 00:23:54.569229 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:54.569669 containerd[1574]: time="2025-11-08T00:23:54.569633293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 8 00:23:54.643670 kubelet[2284]: W1108 00:23:54.643606 2284 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Nov 8 00:23:54.643852 kubelet[2284]: E1108 00:23:54.643676 2284 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:23:54.770059 kubelet[2284]: I1108 00:23:54.770015 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:23:54.770495 kubelet[2284]: E1108 00:23:54.770451 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Nov 8 00:23:54.786285 kubelet[2284]: W1108 00:23:54.786212 2284 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Nov 8 00:23:54.786344 kubelet[2284]: E1108 00:23:54.786282 2284 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:23:54.917895 kubelet[2284]: E1108 00:23:54.917854 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="1.6s" Nov 8 00:23:54.970982 kubelet[2284]: E1108 00:23:54.970831 2284 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.93:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.93:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.1875e0490b076f4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-08 00:23:53.507688267 +0000 UTC m=+0.393816106,LastTimestamp:2025-11-08 00:23:53.507688267 +0000 UTC m=+0.393816106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 8 00:23:55.003714 kubelet[2284]: W1108 00:23:55.003653 2284 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Nov 8 00:23:55.003812 kubelet[2284]: E1108 00:23:55.003723 2284 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:23:55.095715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1550036324.mount: Deactivated successfully. Nov 8 00:23:55.100866 containerd[1574]: time="2025-11-08T00:23:55.100812754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:23:55.102537 containerd[1574]: time="2025-11-08T00:23:55.102485231Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:23:55.103487 containerd[1574]: time="2025-11-08T00:23:55.103411889Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:23:55.104343 containerd[1574]: time="2025-11-08T00:23:55.104308871Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:23:55.105329 containerd[1574]: time="2025-11-08T00:23:55.105273931Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:23:55.106278 containerd[1574]: time="2025-11-08T00:23:55.106229242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:23:55.107136 containerd[1574]: time="2025-11-08T00:23:55.107092291Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:23:55.108594 containerd[1574]: time="2025-11-08T00:23:55.108560254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:23:55.111069 containerd[1574]: time="2025-11-08T00:23:55.111026940Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 544.135529ms" Nov 8 00:23:55.111818 containerd[1574]: time="2025-11-08T00:23:55.111747451Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 548.523918ms" Nov 8 00:23:55.114466 containerd[1574]: time="2025-11-08T00:23:55.114390978Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 544.67505ms" Nov 8 00:23:55.528146 containerd[1574]: time="2025-11-08T00:23:55.528013238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:23:55.528440 containerd[1574]: time="2025-11-08T00:23:55.528324261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:23:55.528531 containerd[1574]: time="2025-11-08T00:23:55.528406896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:23:55.529125 containerd[1574]: time="2025-11-08T00:23:55.529013824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:23:55.535332 containerd[1574]: time="2025-11-08T00:23:55.535183805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:23:55.535332 containerd[1574]: time="2025-11-08T00:23:55.535296016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:23:55.535332 containerd[1574]: time="2025-11-08T00:23:55.535312356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:23:55.535994 containerd[1574]: time="2025-11-08T00:23:55.535945864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:23:55.539521 containerd[1574]: time="2025-11-08T00:23:55.539189758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:23:55.539521 containerd[1574]: time="2025-11-08T00:23:55.539231466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:23:55.539521 containerd[1574]: time="2025-11-08T00:23:55.539243338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:23:55.539521 containerd[1574]: time="2025-11-08T00:23:55.539335792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:23:55.549064 kubelet[2284]: E1108 00:23:55.549023 2284 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:23:55.635472 kubelet[2284]: I1108 00:23:55.633218 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:23:55.635472 kubelet[2284]: E1108 00:23:55.633617 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Nov 8 00:23:55.695346 containerd[1574]: time="2025-11-08T00:23:55.695302745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfa4a18b706e5ff9784e8e710174c36d113a72cedfa7e68fb302d6ca71778f7d\"" Nov 8 00:23:55.697220 kubelet[2284]: E1108 00:23:55.697189 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:55.700448 containerd[1574]: time="2025-11-08T00:23:55.700386750Z" level=info msg="CreateContainer within sandbox \"cfa4a18b706e5ff9784e8e710174c36d113a72cedfa7e68fb302d6ca71778f7d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:23:55.708108 containerd[1574]: time="2025-11-08T00:23:55.708073004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fedbdd6127f02c0d922d225f8983f314,Namespace:kube-system,Attempt:0,} returns sandbox id \"2549c6505c024f24ed41508f50e871fc96d051c08037c6b48658868a093fece9\"" Nov 8 00:23:55.709943 kubelet[2284]: E1108 00:23:55.709919 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:55.711599 containerd[1574]: time="2025-11-08T00:23:55.711497717Z" level=info msg="CreateContainer within sandbox \"2549c6505c024f24ed41508f50e871fc96d051c08037c6b48658868a093fece9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:23:55.716054 containerd[1574]: time="2025-11-08T00:23:55.716033252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"68a3156a4139f6d9d02d6eb24439ca3fec0ba8b5e598175f235512c0d29bf2d6\"" Nov 8 00:23:55.716655 kubelet[2284]: E1108 00:23:55.716630 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:55.718146 containerd[1574]: time="2025-11-08T00:23:55.718106200Z" level=info msg="CreateContainer within sandbox \"68a3156a4139f6d9d02d6eb24439ca3fec0ba8b5e598175f235512c0d29bf2d6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:23:56.348485 containerd[1574]: time="2025-11-08T00:23:56.348398994Z" level=info msg="CreateContainer within sandbox 
\"2549c6505c024f24ed41508f50e871fc96d051c08037c6b48658868a093fece9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fa223826cbd8e3ae22fc8cf6640f48c75f60a9df7ca864758d223d65d8656046\"" Nov 8 00:23:56.349207 containerd[1574]: time="2025-11-08T00:23:56.349179978Z" level=info msg="StartContainer for \"fa223826cbd8e3ae22fc8cf6640f48c75f60a9df7ca864758d223d65d8656046\"" Nov 8 00:23:56.351442 containerd[1574]: time="2025-11-08T00:23:56.351399160Z" level=info msg="CreateContainer within sandbox \"cfa4a18b706e5ff9784e8e710174c36d113a72cedfa7e68fb302d6ca71778f7d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"174cc6ce67181b871f578f09ffb86b4b9bc67e73ab474736c5255e7d916bb0cb\"" Nov 8 00:23:56.351805 containerd[1574]: time="2025-11-08T00:23:56.351778732Z" level=info msg="StartContainer for \"174cc6ce67181b871f578f09ffb86b4b9bc67e73ab474736c5255e7d916bb0cb\"" Nov 8 00:23:56.353513 containerd[1574]: time="2025-11-08T00:23:56.353483940Z" level=info msg="CreateContainer within sandbox \"68a3156a4139f6d9d02d6eb24439ca3fec0ba8b5e598175f235512c0d29bf2d6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1d90592391a9ad4adccc02ef5b91a98fea5c63afbf66d1e58bab9bf7f2e8822c\"" Nov 8 00:23:56.353887 containerd[1574]: time="2025-11-08T00:23:56.353864393Z" level=info msg="StartContainer for \"1d90592391a9ad4adccc02ef5b91a98fea5c63afbf66d1e58bab9bf7f2e8822c\"" Nov 8 00:23:56.441052 kubelet[2284]: W1108 00:23:56.440864 2284 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Nov 8 00:23:56.441052 kubelet[2284]: E1108 00:23:56.440933 2284 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:23:56.483120 containerd[1574]: time="2025-11-08T00:23:56.482989379Z" level=info msg="StartContainer for \"fa223826cbd8e3ae22fc8cf6640f48c75f60a9df7ca864758d223d65d8656046\" returns successfully" Nov 8 00:23:56.501085 containerd[1574]: time="2025-11-08T00:23:56.501025924Z" level=info msg="StartContainer for \"1d90592391a9ad4adccc02ef5b91a98fea5c63afbf66d1e58bab9bf7f2e8822c\" returns successfully" Nov 8 00:23:56.518881 kubelet[2284]: E1108 00:23:56.518826 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="3.2s" Nov 8 00:23:56.520829 containerd[1574]: time="2025-11-08T00:23:56.520780100Z" level=info msg="StartContainer for \"174cc6ce67181b871f578f09ffb86b4b9bc67e73ab474736c5255e7d916bb0cb\" returns successfully" Nov 8 00:23:56.569718 kubelet[2284]: E1108 00:23:56.569363 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:23:56.569718 kubelet[2284]: E1108 00:23:56.569572 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 
8 00:23:56.572121 kubelet[2284]: E1108 00:23:56.571780 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:23:56.572121 kubelet[2284]: E1108 00:23:56.571906 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:56.577284 kubelet[2284]: E1108 00:23:56.577092 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:23:56.577284 kubelet[2284]: E1108 00:23:56.577222 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:57.235619 kubelet[2284]: I1108 00:23:57.235566 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:23:57.705269 kubelet[2284]: E1108 00:23:57.578982 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:23:57.705269 kubelet[2284]: E1108 00:23:57.579107 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:57.705269 kubelet[2284]: E1108 00:23:57.579325 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:23:57.705269 kubelet[2284]: E1108 00:23:57.579398 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:57.705269 kubelet[2284]: E1108 00:23:57.582044 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:23:57.705269 kubelet[2284]: E1108 00:23:57.582134 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:23:57.921379 kubelet[2284]: I1108 00:23:57.920441 2284 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:23:58.014492 kubelet[2284]: I1108 00:23:58.014334 2284 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:23:58.019704 kubelet[2284]: E1108 00:23:58.019649 2284 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:23:58.019704 kubelet[2284]: I1108 00:23:58.019693 2284 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:23:58.020986 kubelet[2284]: E1108 00:23:58.020953 2284 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 8 00:23:58.020986 kubelet[2284]: I1108 00:23:58.020980 2284 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-localhost" Nov 8 00:23:58.022213 kubelet[2284]: E1108 00:23:58.022183 2284 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 8 00:23:58.505613 kubelet[2284]: I1108 00:23:58.505536 2284 apiserver.go:52] "Watching apiserver" Nov 8 00:23:58.515264 kubelet[2284]: I1108 00:23:58.515202 2284 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:24:00.056502 systemd[1]: Reloading requested from client PID 2565 ('systemctl') (unit session-7.scope)... Nov 8 00:24:00.056517 systemd[1]: Reloading... Nov 8 00:24:00.136469 zram_generator::config[2604]: No configuration found. Nov 8 00:24:00.256831 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:24:00.355606 systemd[1]: Reloading finished in 298 ms. Nov 8 00:24:00.394197 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:24:00.417137 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:24:00.417685 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:24:00.424955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:24:00.601499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:24:00.610988 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:24:00.664818 kubelet[2659]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:24:00.664818 kubelet[2659]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:24:00.664818 kubelet[2659]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:24:00.665289 kubelet[2659]: I1108 00:24:00.664889 2659 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:24:00.672750 kubelet[2659]: I1108 00:24:00.672715 2659 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:24:00.672750 kubelet[2659]: I1108 00:24:00.672739 2659 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:24:00.672963 kubelet[2659]: I1108 00:24:00.672943 2659 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:24:00.674183 kubelet[2659]: I1108 00:24:00.674160 2659 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 8 00:24:00.676639 kubelet[2659]: I1108 00:24:00.676595 2659 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:24:00.679179 kubelet[2659]: E1108 00:24:00.679152 2659 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:24:00.679179 kubelet[2659]: I1108 00:24:00.679178 2659 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:24:00.685129 kubelet[2659]: I1108 00:24:00.685087 2659 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 8 00:24:00.685818 kubelet[2659]: I1108 00:24:00.685777 2659 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:24:00.685972 kubelet[2659]: I1108 00:24:00.685812 2659 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 8 00:24:00.686066 kubelet[2659]: I1108 00:24:00.685976 2659 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:24:00.686066 kubelet[2659]: I1108 00:24:00.685987 2659 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:24:00.686066 kubelet[2659]: I1108 00:24:00.686036 2659 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:24:00.686218 kubelet[2659]: I1108 00:24:00.686196 2659 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:24:00.686243 kubelet[2659]: I1108 00:24:00.686224 2659 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:24:00.686264 kubelet[2659]: I1108 00:24:00.686245 2659 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:24:00.686264 kubelet[2659]: I1108 00:24:00.686257 2659 apiserver.go:42] "Waiting for node sync before watching apiserver 
pods" Nov 8 00:24:00.688866 kubelet[2659]: I1108 00:24:00.688838 2659 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:24:00.690572 kubelet[2659]: I1108 00:24:00.689273 2659 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:24:00.690572 kubelet[2659]: I1108 00:24:00.690122 2659 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:24:00.690572 kubelet[2659]: I1108 00:24:00.690189 2659 server.go:1287] "Started kubelet" Nov 8 00:24:00.691161 kubelet[2659]: I1108 00:24:00.691115 2659 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:24:00.693163 kubelet[2659]: I1108 00:24:00.691466 2659 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:24:00.693163 kubelet[2659]: I1108 00:24:00.691517 2659 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:24:00.693163 kubelet[2659]: I1108 00:24:00.692314 2659 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:24:00.694199 kubelet[2659]: I1108 00:24:00.694174 2659 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:24:00.696256 kubelet[2659]: E1108 00:24:00.696225 2659 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:24:00.698638 kubelet[2659]: E1108 00:24:00.698602 2659 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:24:00.698693 kubelet[2659]: I1108 00:24:00.698643 2659 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:24:00.698693 kubelet[2659]: I1108 00:24:00.698667 2659 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:24:00.698797 kubelet[2659]: I1108 00:24:00.698781 2659 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:24:00.699770 kubelet[2659]: I1108 00:24:00.698898 2659 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:24:00.707631 kubelet[2659]: I1108 00:24:00.706976 2659 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:24:00.707631 kubelet[2659]: I1108 00:24:00.706995 2659 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:24:00.707631 kubelet[2659]: I1108 00:24:00.707085 2659 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:24:00.718380 kubelet[2659]: I1108 00:24:00.718341 2659 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:24:00.720094 kubelet[2659]: I1108 00:24:00.719716 2659 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:24:00.720094 kubelet[2659]: I1108 00:24:00.719745 2659 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:24:00.720094 kubelet[2659]: I1108 00:24:00.719770 2659 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 00:24:00.720094 kubelet[2659]: I1108 00:24:00.719778 2659 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:24:00.720094 kubelet[2659]: E1108 00:24:00.719842 2659 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:24:00.759096 kubelet[2659]: I1108 00:24:00.759059 2659 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:24:00.759096 kubelet[2659]: I1108 00:24:00.759084 2659 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:24:00.759274 kubelet[2659]: I1108 00:24:00.759111 2659 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:24:00.759320 kubelet[2659]: I1108 00:24:00.759302 2659 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:24:00.759345 kubelet[2659]: I1108 00:24:00.759318 2659 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:24:00.759345 kubelet[2659]: I1108 00:24:00.759340 2659 policy_none.go:49] "None policy: Start" Nov 8 00:24:00.759388 kubelet[2659]: I1108 00:24:00.759352 2659 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:24:00.759388 kubelet[2659]: I1108 00:24:00.759366 2659 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:24:00.759524 kubelet[2659]: I1108 00:24:00.759508 2659 state_mem.go:75] "Updated machine memory state" Nov 8 00:24:00.762465 kubelet[2659]: I1108 00:24:00.761314 2659 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:24:00.762465 kubelet[2659]: I1108 00:24:00.761559 2659 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:24:00.762465 kubelet[2659]: I1108 00:24:00.761572 2659 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:24:00.762465 kubelet[2659]: I1108 00:24:00.761783 2659 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:24:00.763138 kubelet[2659]: E1108 00:24:00.763116 2659 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:24:00.821250 kubelet[2659]: I1108 00:24:00.821188 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:24:00.821497 kubelet[2659]: I1108 00:24:00.821294 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:24:00.821497 kubelet[2659]: I1108 00:24:00.821359 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:24:00.866588 kubelet[2659]: I1108 00:24:00.866480 2659 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:24:00.873432 kubelet[2659]: I1108 00:24:00.873401 2659 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 8 00:24:00.873522 kubelet[2659]: I1108 00:24:00.873502 2659 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:24:01.000546 kubelet[2659]: I1108 00:24:01.000485 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fedbdd6127f02c0d922d225f8983f314-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fedbdd6127f02c0d922d225f8983f314\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:24:01.000546 kubelet[2659]: I1108 00:24:01.000520 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fedbdd6127f02c0d922d225f8983f314-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fedbdd6127f02c0d922d225f8983f314\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:24:01.000546 kubelet[2659]: I1108 00:24:01.000541 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:24:01.000546 kubelet[2659]: I1108 00:24:01.000559 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:24:01.000820 kubelet[2659]: I1108 00:24:01.000585 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:24:01.000820 kubelet[2659]: I1108 00:24:01.000603 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fedbdd6127f02c0d922d225f8983f314-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fedbdd6127f02c0d922d225f8983f314\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:24:01.000820 kubelet[2659]: I1108 00:24:01.000619 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:24:01.000820 kubelet[2659]: I1108 00:24:01.000637 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:24:01.000820 kubelet[2659]: I1108 00:24:01.000656 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:24:01.127066 kubelet[2659]: E1108 00:24:01.126860 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:01.129124 kubelet[2659]: E1108 00:24:01.129054 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:01.129290 kubelet[2659]: E1108 00:24:01.129267 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:01.686777 kubelet[2659]: I1108 00:24:01.686730 2659 apiserver.go:52] "Watching apiserver" Nov 8 00:24:01.699224 kubelet[2659]: I1108 00:24:01.699183 2659 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:24:01.731704 kubelet[2659]: I1108 00:24:01.731639 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:24:01.731704 kubelet[2659]: I1108 00:24:01.731664 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:24:01.731704 kubelet[2659]: E1108 00:24:01.731678 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:01.902536 kubelet[2659]: E1108 00:24:01.902056 2659 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:24:01.902536 kubelet[2659]: E1108 00:24:01.902269 2659 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 8 00:24:01.902536 kubelet[2659]: E1108 00:24:01.902359 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:01.902714 kubelet[2659]: E1108 00:24:01.902584 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:01.921439 kubelet[2659]: I1108 00:24:01.921338 2659 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.921312394 podStartE2EDuration="1.921312394s" podCreationTimestamp="2025-11-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:24:01.920878115 +0000 UTC m=+1.299998109" watchObservedRunningTime="2025-11-08 00:24:01.921312394 +0000 UTC m=+1.300432388" Nov 8 00:24:01.941874 kubelet[2659]: I1108 00:24:01.941678 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.941651397 podStartE2EDuration="1.941651397s" podCreationTimestamp="2025-11-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:24:01.928050622 +0000 UTC m=+1.307170617" watchObservedRunningTime="2025-11-08 00:24:01.941651397 +0000 UTC m=+1.320771391" Nov 8 00:24:01.956454 kubelet[2659]: I1108 00:24:01.956010 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.955966209 podStartE2EDuration="1.955966209s" podCreationTimestamp="2025-11-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:24:01.941951385 +0000 UTC m=+1.321071378" watchObservedRunningTime="2025-11-08 00:24:01.955966209 +0000 UTC m=+1.335086203" Nov 8 00:24:02.734010 kubelet[2659]: E1108 00:24:02.733962 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:02.734747 kubelet[2659]: E1108 00:24:02.734724 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:02.734972 kubelet[2659]: E1108 00:24:02.734948 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:06.822831 kubelet[2659]: I1108 00:24:06.822791 2659 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:24:06.823357 containerd[1574]: time="2025-11-08T00:24:06.823236273Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 8 00:24:06.823700 kubelet[2659]: I1108 00:24:06.823472 2659 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:24:07.741221 kubelet[2659]: I1108 00:24:07.741183 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm9tv\" (UniqueName: \"kubernetes.io/projected/c8ead742-59e0-41b0-a421-58f9f9cef9fb-kube-api-access-gm9tv\") pod \"kube-proxy-2mtct\" (UID: \"c8ead742-59e0-41b0-a421-58f9f9cef9fb\") " pod="kube-system/kube-proxy-2mtct" Nov 8 00:24:07.741362 kubelet[2659]: I1108 00:24:07.741220 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c8ead742-59e0-41b0-a421-58f9f9cef9fb-kube-proxy\") pod \"kube-proxy-2mtct\" (UID: \"c8ead742-59e0-41b0-a421-58f9f9cef9fb\") " pod="kube-system/kube-proxy-2mtct" Nov 8 00:24:07.741362 kubelet[2659]: I1108 00:24:07.741274 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8ead742-59e0-41b0-a421-58f9f9cef9fb-xtables-lock\") pod \"kube-proxy-2mtct\" (UID: \"c8ead742-59e0-41b0-a421-58f9f9cef9fb\") " pod="kube-system/kube-proxy-2mtct" Nov 8 00:24:07.741362 kubelet[2659]: I1108 00:24:07.741291 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8ead742-59e0-41b0-a421-58f9f9cef9fb-lib-modules\") pod \"kube-proxy-2mtct\" (UID: \"c8ead742-59e0-41b0-a421-58f9f9cef9fb\") " pod="kube-system/kube-proxy-2mtct" Nov 8 00:24:07.841597 kubelet[2659]: I1108 00:24:07.841549 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78mcs\" (UniqueName: \"kubernetes.io/projected/9444dacb-f65b-4307-a4b9-604ca966e1e4-kube-api-access-78mcs\") pod \"tigera-operator-7dcd859c48-zl7h8\" (UID: \"9444dacb-f65b-4307-a4b9-604ca966e1e4\") " pod="tigera-operator/tigera-operator-7dcd859c48-zl7h8" Nov 8 00:24:07.842049 kubelet[2659]: I1108 00:24:07.841634 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9444dacb-f65b-4307-a4b9-604ca966e1e4-var-lib-calico\") pod \"tigera-operator-7dcd859c48-zl7h8\" (UID: \"9444dacb-f65b-4307-a4b9-604ca966e1e4\") " pod="tigera-operator/tigera-operator-7dcd859c48-zl7h8" Nov 8 00:24:08.023756 kubelet[2659]: E1108 00:24:08.023626 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:08.024384 containerd[1574]: time="2025-11-08T00:24:08.024200295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2mtct,Uid:c8ead742-59e0-41b0-a421-58f9f9cef9fb,Namespace:kube-system,Attempt:0,}" Nov 8 00:24:08.049251 containerd[1574]: time="2025-11-08T00:24:08.048545525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:08.049251 containerd[1574]: time="2025-11-08T00:24:08.049230624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:08.049251 containerd[1574]: time="2025-11-08T00:24:08.049245246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:08.049571 containerd[1574]: time="2025-11-08T00:24:08.049346798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:08.091563 containerd[1574]: time="2025-11-08T00:24:08.091505985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2mtct,Uid:c8ead742-59e0-41b0-a421-58f9f9cef9fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"3de67d8bbb29c069b42d3be472ecf5fccd0cc702fba990e5001570e9365d3a2d\"" Nov 8 00:24:08.092336 kubelet[2659]: E1108 00:24:08.092295 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:08.094913 containerd[1574]: time="2025-11-08T00:24:08.094876770Z" level=info msg="CreateContainer within sandbox \"3de67d8bbb29c069b42d3be472ecf5fccd0cc702fba990e5001570e9365d3a2d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:24:08.109460 containerd[1574]: time="2025-11-08T00:24:08.109409444Z" level=info msg="CreateContainer within sandbox \"3de67d8bbb29c069b42d3be472ecf5fccd0cc702fba990e5001570e9365d3a2d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"52536ef655834e6f520da5ce1db39ccda02dbe572c4edc0c1a001bcdc0c82ec2\"" Nov 8 00:24:08.109884 containerd[1574]: time="2025-11-08T00:24:08.109853880Z" level=info msg="StartContainer for \"52536ef655834e6f520da5ce1db39ccda02dbe572c4edc0c1a001bcdc0c82ec2\"" Nov 8 00:24:08.132989 containerd[1574]: time="2025-11-08T00:24:08.132942996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-zl7h8,Uid:9444dacb-f65b-4307-a4b9-604ca966e1e4,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:24:08.160102 containerd[1574]: time="2025-11-08T00:24:08.159843406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:08.160102 containerd[1574]: time="2025-11-08T00:24:08.159905224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:08.160328 containerd[1574]: time="2025-11-08T00:24:08.160119935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:08.160328 containerd[1574]: time="2025-11-08T00:24:08.160213604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:08.170068 kubelet[2659]: E1108 00:24:08.170031 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:08.210394 containerd[1574]: time="2025-11-08T00:24:08.210353699Z" level=info msg="StartContainer for \"52536ef655834e6f520da5ce1db39ccda02dbe572c4edc0c1a001bcdc0c82ec2\" returns successfully" Nov 8 00:24:08.230594 containerd[1574]: time="2025-11-08T00:24:08.230551542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-zl7h8,Uid:9444dacb-f65b-4307-a4b9-604ca966e1e4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"321708acd37938462e90f46388c049d6fe65b09fe9386a04171bbacbd14576fe\"" Nov 8 00:24:08.232659 containerd[1574]: time="2025-11-08T00:24:08.232628030Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:24:08.744937 kubelet[2659]: E1108 00:24:08.744903 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:08.745482 kubelet[2659]: E1108 00:24:08.745101 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:08.761813 kubelet[2659]: I1108 00:24:08.761745 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2mtct" podStartSLOduration=1.761725163 podStartE2EDuration="1.761725163s" podCreationTimestamp="2025-11-08 00:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:24:08.754282199 +0000 UTC m=+8.133402193" watchObservedRunningTime="2025-11-08 00:24:08.761725163 +0000 UTC m=+8.140845157" Nov 8 00:24:08.954789 kubelet[2659]: E1108 00:24:08.954743 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:09.258700 kubelet[2659]: E1108 00:24:09.258656 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:09.328257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3608812945.mount: Deactivated successfully. 
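The PullImage line above is issued to containerd over CRI. Roughly the same pull can be reproduced directly with containerd's Go client, which is what sits behind that log line; the socket path and the "k8s.io" namespace are the stock containerd/CRI defaults, assumed here rather than read from this host's config:

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	size, _ := img.Size(ctx)
	fmt.Println("pulled", img.Name(), "size", size)
}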
Nov 8 00:24:09.747016 kubelet[2659]: E1108 00:24:09.746970 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:09.747599 kubelet[2659]: E1108 00:24:09.747572 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:09.747871 kubelet[2659]: E1108 00:24:09.747833 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:09.924779 containerd[1574]: time="2025-11-08T00:24:09.923542646Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:24:09.924779 containerd[1574]: time="2025-11-08T00:24:09.923584163Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:09.925313 containerd[1574]: time="2025-11-08T00:24:09.925102863Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:09.934455 containerd[1574]: time="2025-11-08T00:24:09.933581523Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:09.935747 containerd[1574]: time="2025-11-08T00:24:09.935698394Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.703037822s" Nov 8 00:24:09.935923 containerd[1574]: time="2025-11-08T00:24:09.935736074Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:24:09.938112 containerd[1574]: time="2025-11-08T00:24:09.937969061Z" level=info msg="CreateContainer within sandbox \"321708acd37938462e90f46388c049d6fe65b09fe9386a04171bbacbd14576fe\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:24:09.950325 containerd[1574]: time="2025-11-08T00:24:09.950279195Z" level=info msg="CreateContainer within sandbox \"321708acd37938462e90f46388c049d6fe65b09fe9386a04171bbacbd14576fe\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e5c12b386ddabe5aa171699d4a22c0e4a79465d398334a3e032cd5f762d6adf0\"" Nov 8 00:24:09.950760 containerd[1574]: time="2025-11-08T00:24:09.950730231Z" level=info msg="StartContainer for \"e5c12b386ddabe5aa171699d4a22c0e4a79465d398334a3e032cd5f762d6adf0\"" Nov 8 00:24:10.005101 containerd[1574]: time="2025-11-08T00:24:10.004987367Z" level=info msg="StartContainer for \"e5c12b386ddabe5aa171699d4a22c0e4a79465d398334a3e032cd5f762d6adf0\" returns successfully" Nov 8 00:24:10.480143 update_engine[1561]: I20251108 00:24:10.480011 1561 update_attempter.cc:509] Updating boot flags... 
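As a sanity check, the pull duration containerd reports for quay.io/tigera/operator:v1.38.7 (1.703037822s) lines up with the gap between the PullImage and Pulled log timestamps above; the ~30µs residue is just logging latency around containerd's internal timer:

```go
// Delta between the two containerd log timestamps bracketing the pull,
// copied verbatim from the entries above.
package main

import (
	"fmt"
	"time"
)

func main() {
	start, _ := time.Parse(time.RFC3339Nano, "2025-11-08T00:24:08.232628030Z") // PullImage
	end, _ := time.Parse(time.RFC3339Nano, "2025-11-08T00:24:09.935698394Z")   // Pulled
	fmt.Println("log-timestamp delta:", end.Sub(start)) // 1.703070364s vs reported 1.703037822s
}
```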
Nov 8 00:24:10.510874 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (3006) Nov 8 00:24:10.555531 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (3005) Nov 8 00:24:10.587463 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (3005) Nov 8 00:24:10.757825 kubelet[2659]: I1108 00:24:10.757660 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-zl7h8" podStartSLOduration=2.052805191 podStartE2EDuration="3.757642345s" podCreationTimestamp="2025-11-08 00:24:07 +0000 UTC" firstStartedPulling="2025-11-08 00:24:08.231741042 +0000 UTC m=+7.610861046" lastFinishedPulling="2025-11-08 00:24:09.936578206 +0000 UTC m=+9.315698200" observedRunningTime="2025-11-08 00:24:10.757486191 +0000 UTC m=+10.136606195" watchObservedRunningTime="2025-11-08 00:24:10.757642345 +0000 UTC m=+10.136762339" Nov 8 00:24:12.115328 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5c12b386ddabe5aa171699d4a22c0e4a79465d398334a3e032cd5f762d6adf0-rootfs.mount: Deactivated successfully. Nov 8 00:24:12.150536 containerd[1574]: time="2025-11-08T00:24:12.147106521Z" level=info msg="shim disconnected" id=e5c12b386ddabe5aa171699d4a22c0e4a79465d398334a3e032cd5f762d6adf0 namespace=k8s.io Nov 8 00:24:12.151344 containerd[1574]: time="2025-11-08T00:24:12.151101037Z" level=warning msg="cleaning up after shim disconnected" id=e5c12b386ddabe5aa171699d4a22c0e4a79465d398334a3e032cd5f762d6adf0 namespace=k8s.io Nov 8 00:24:12.151344 containerd[1574]: time="2025-11-08T00:24:12.151143347Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:24:12.756826 kubelet[2659]: I1108 00:24:12.756771 2659 scope.go:117] "RemoveContainer" containerID="e5c12b386ddabe5aa171699d4a22c0e4a79465d398334a3e032cd5f762d6adf0" Nov 8 00:24:12.760577 containerd[1574]: time="2025-11-08T00:24:12.760476155Z" level=info msg="CreateContainer within sandbox \"321708acd37938462e90f46388c049d6fe65b09fe9386a04171bbacbd14576fe\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 8 00:24:12.792149 containerd[1574]: time="2025-11-08T00:24:12.792105461Z" level=info msg="CreateContainer within sandbox \"321708acd37938462e90f46388c049d6fe65b09fe9386a04171bbacbd14576fe\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b55d844a6abd22ae07d2b96d1dc2cc0555f79b01b21f14b43895f39b77bd6839\"" Nov 8 00:24:12.794502 containerd[1574]: time="2025-11-08T00:24:12.794362444Z" level=info msg="StartContainer for \"b55d844a6abd22ae07d2b96d1dc2cc0555f79b01b21f14b43895f39b77bd6839\"" Nov 8 00:24:12.908379 containerd[1574]: time="2025-11-08T00:24:12.908291352Z" level=info msg="StartContainer for \"b55d844a6abd22ae07d2b96d1dc2cc0555f79b01b21f14b43895f39b77bd6839\" returns successfully" Nov 8 00:24:15.245568 sudo[1761]: pam_unix(sudo:session): session closed for user root Nov 8 00:24:15.248307 sshd[1755]: pam_unix(sshd:session): session closed for user core Nov 8 00:24:15.253695 systemd[1]: sshd@6-10.0.0.93:22-10.0.0.1:49280.service: Deactivated successfully. Nov 8 00:24:15.256503 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:24:15.257416 systemd-logind[1557]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:24:15.259038 systemd-logind[1557]: Removed session 7. 
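The pod_startup_latency_tracker entry above encodes two derived figures: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. Re-deriving both from the logged timestamps reproduces the entry; wall-clock math lands within ~10ns of the logged SLO value, which appears to come from the monotonic m=+ offsets instead:

```go
// Re-deriving the tigera-operator startup figures from the timestamps in
// the pod_startup_latency_tracker entry above.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-08 00:24:07 +0000 UTC")              // podCreationTimestamp
	running := mustParse("2025-11-08 00:24:10.757642345 +0000 UTC")    // watchObservedRunningTime
	pullStart := mustParse("2025-11-08 00:24:08.231741042 +0000 UTC")  // firstStartedPulling
	pullEnd := mustParse("2025-11-08 00:24:09.936578206 +0000 UTC")    // lastFinishedPulling

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println("podStartE2EDuration:", e2e) // 3.757642345s, exactly as logged
	fmt.Println("podStartSLOduration:", slo) // 2.052805181s; entry logs 2.052805191s (monotonic rounding)
}
```

The same arithmetic explains the earlier kube-proxy entry: its pull timestamps are the zero value (0001-01-01), i.e. the image was already present, so SLO and E2E durations coincide at 1.761725163s.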
Nov 8 00:24:20.332035 kubelet[2659]: I1108 00:24:20.331957 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08ed8d50-5345-4d84-9b70-cb7a6f4a3178-tigera-ca-bundle\") pod \"calico-typha-7bbffc5489-nx5b7\" (UID: \"08ed8d50-5345-4d84-9b70-cb7a6f4a3178\") " pod="calico-system/calico-typha-7bbffc5489-nx5b7" Nov 8 00:24:20.332035 kubelet[2659]: I1108 00:24:20.332010 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/08ed8d50-5345-4d84-9b70-cb7a6f4a3178-typha-certs\") pod \"calico-typha-7bbffc5489-nx5b7\" (UID: \"08ed8d50-5345-4d84-9b70-cb7a6f4a3178\") " pod="calico-system/calico-typha-7bbffc5489-nx5b7" Nov 8 00:24:20.332035 kubelet[2659]: I1108 00:24:20.332030 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jpd9\" (UniqueName: \"kubernetes.io/projected/08ed8d50-5345-4d84-9b70-cb7a6f4a3178-kube-api-access-5jpd9\") pod \"calico-typha-7bbffc5489-nx5b7\" (UID: \"08ed8d50-5345-4d84-9b70-cb7a6f4a3178\") " pod="calico-system/calico-typha-7bbffc5489-nx5b7" Nov 8 00:24:20.534088 kubelet[2659]: I1108 00:24:20.533995 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf8r8\" (UniqueName: \"kubernetes.io/projected/2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07-kube-api-access-lf8r8\") pod \"calico-node-zs9pp\" (UID: \"2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07\") " pod="calico-system/calico-node-zs9pp" Nov 8 00:24:20.534263 kubelet[2659]: I1108 00:24:20.534148 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07-policysync\") pod \"calico-node-zs9pp\" (UID: \"2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07\") " pod="calico-system/calico-node-zs9pp" Nov 8 00:24:20.534263 kubelet[2659]: I1108 00:24:20.534190 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07-var-lib-calico\") pod \"calico-node-zs9pp\" (UID: \"2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07\") " pod="calico-system/calico-node-zs9pp" Nov 8 00:24:20.534263 kubelet[2659]: I1108 00:24:20.534231 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07-lib-modules\") pod \"calico-node-zs9pp\" (UID: \"2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07\") " pod="calico-system/calico-node-zs9pp" Nov 8 00:24:20.534263 kubelet[2659]: I1108 00:24:20.534256 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07-node-certs\") pod \"calico-node-zs9pp\" (UID: \"2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07\") " pod="calico-system/calico-node-zs9pp" Nov 8 00:24:20.534371 kubelet[2659]: I1108 00:24:20.534293 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07-cni-bin-dir\") pod \"calico-node-zs9pp\" (UID: \"2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07\") " pod="calico-system/calico-node-zs9pp" Nov 8 
00:24:20.534371 kubelet[2659]: I1108 00:24:20.534320 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07-tigera-ca-bundle\") pod \"calico-node-zs9pp\" (UID: \"2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07\") " pod="calico-system/calico-node-zs9pp" Nov 8 00:24:20.534371 kubelet[2659]: I1108 00:24:20.534349 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07-flexvol-driver-host\") pod \"calico-node-zs9pp\" (UID: \"2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07\") " pod="calico-system/calico-node-zs9pp" Nov 8 00:24:20.534469 kubelet[2659]: I1108 00:24:20.534381 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07-xtables-lock\") pod \"calico-node-zs9pp\" (UID: \"2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07\") " pod="calico-system/calico-node-zs9pp" Nov 8 00:24:20.534469 kubelet[2659]: I1108 00:24:20.534453 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07-cni-log-dir\") pod \"calico-node-zs9pp\" (UID: \"2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07\") " pod="calico-system/calico-node-zs9pp" Nov 8 00:24:20.534522 kubelet[2659]: I1108 00:24:20.534481 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07-cni-net-dir\") pod \"calico-node-zs9pp\" (UID: \"2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07\") " pod="calico-system/calico-node-zs9pp" Nov 8 00:24:20.534522 kubelet[2659]: I1108 00:24:20.534514 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07-var-run-calico\") pod \"calico-node-zs9pp\" (UID: \"2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07\") " pod="calico-system/calico-node-zs9pp" Nov 8 00:24:20.606469 kubelet[2659]: E1108 00:24:20.606287 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:20.607749 containerd[1574]: time="2025-11-08T00:24:20.607715469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bbffc5489-nx5b7,Uid:08ed8d50-5345-4d84-9b70-cb7a6f4a3178,Namespace:calico-system,Attempt:0,}" Nov 8 00:24:20.638161 kubelet[2659]: E1108 00:24:20.638110 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.638161 kubelet[2659]: W1108 00:24:20.638144 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.638380 kubelet[2659]: E1108 00:24:20.638216 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:24:20.638776 containerd[1574]: time="2025-11-08T00:24:20.638501002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:20.641179 kubelet[2659]: E1108 00:24:20.641120 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.641179 kubelet[2659]: W1108 00:24:20.641151 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.641310 kubelet[2659]: E1108 00:24:20.641203 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.642502 containerd[1574]: time="2025-11-08T00:24:20.638904356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:20.642502 containerd[1574]: time="2025-11-08T00:24:20.640080355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:20.642502 containerd[1574]: time="2025-11-08T00:24:20.640223104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:20.650310 kubelet[2659]: E1108 00:24:20.650195 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.650310 kubelet[2659]: W1108 00:24:20.650229 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.650310 kubelet[2659]: E1108 00:24:20.650250 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:24:20.709857 containerd[1574]: time="2025-11-08T00:24:20.709182449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bbffc5489-nx5b7,Uid:08ed8d50-5345-4d84-9b70-cb7a6f4a3178,Namespace:calico-system,Attempt:0,} returns sandbox id \"2dea2492f4c742c80211c3dc0d5762d1bc3e58f06ac8265fbf64bd5417f0244c\"" Nov 8 00:24:20.710573 kubelet[2659]: E1108 00:24:20.709942 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:20.711543 containerd[1574]: time="2025-11-08T00:24:20.711504236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:24:20.736707 kubelet[2659]: E1108 00:24:20.736626 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mbvz" podUID="9805d816-e7c8-479d-9360-d3b3efa64586" Nov 8 00:24:20.785880 kubelet[2659]: E1108 00:24:20.785827 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:20.786416 containerd[1574]: time="2025-11-08T00:24:20.786373392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zs9pp,Uid:2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07,Namespace:calico-system,Attempt:0,}" Nov 8 00:24:20.822889 kubelet[2659]: E1108 00:24:20.822838 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.822889 kubelet[2659]: W1108 00:24:20.822873 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.822889 kubelet[2659]: E1108 00:24:20.822904 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.827284 kubelet[2659]: E1108 00:24:20.825328 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.827284 kubelet[2659]: W1108 00:24:20.825346 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.827284 kubelet[2659]: E1108 00:24:20.825358 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:24:20.827284 kubelet[2659]: E1108 00:24:20.825896 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.827284 kubelet[2659]: W1108 00:24:20.825907 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.827284 kubelet[2659]: E1108 00:24:20.825918 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.827284 kubelet[2659]: E1108 00:24:20.826560 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.827284 kubelet[2659]: W1108 00:24:20.826572 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.827284 kubelet[2659]: E1108 00:24:20.826584 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.827284 kubelet[2659]: E1108 00:24:20.827027 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.829721 kubelet[2659]: W1108 00:24:20.827060 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.829721 kubelet[2659]: E1108 00:24:20.827072 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.829721 kubelet[2659]: E1108 00:24:20.827369 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.829721 kubelet[2659]: W1108 00:24:20.827380 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.829721 kubelet[2659]: E1108 00:24:20.827390 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.829721 kubelet[2659]: E1108 00:24:20.828611 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.829721 kubelet[2659]: W1108 00:24:20.828623 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.829721 kubelet[2659]: E1108 00:24:20.828634 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:24:20.830018 kubelet[2659]: E1108 00:24:20.829771 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.830018 kubelet[2659]: W1108 00:24:20.829785 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.830018 kubelet[2659]: E1108 00:24:20.829810 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.830509 kubelet[2659]: E1108 00:24:20.830407 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.830621 containerd[1574]: time="2025-11-08T00:24:20.827566247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:20.830621 containerd[1574]: time="2025-11-08T00:24:20.828216129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:20.830621 containerd[1574]: time="2025-11-08T00:24:20.828230445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:20.830621 containerd[1574]: time="2025-11-08T00:24:20.830337607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:20.830833 kubelet[2659]: W1108 00:24:20.830808 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.830928 kubelet[2659]: E1108 00:24:20.830910 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.831539 kubelet[2659]: E1108 00:24:20.831523 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.831636 kubelet[2659]: W1108 00:24:20.831620 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.831895 kubelet[2659]: E1108 00:24:20.831875 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.832275 kubelet[2659]: E1108 00:24:20.832259 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.832355 kubelet[2659]: W1108 00:24:20.832342 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.832598 kubelet[2659]: E1108 00:24:20.832580 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:24:20.833524 kubelet[2659]: E1108 00:24:20.833509 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.835028 kubelet[2659]: W1108 00:24:20.835003 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.835249 kubelet[2659]: E1108 00:24:20.835094 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.835541 kubelet[2659]: E1108 00:24:20.835527 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.835676 kubelet[2659]: W1108 00:24:20.835607 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.835676 kubelet[2659]: E1108 00:24:20.835623 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.836246 kubelet[2659]: E1108 00:24:20.836229 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.836321 kubelet[2659]: W1108 00:24:20.836308 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.836464 kubelet[2659]: E1108 00:24:20.836370 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.836846 kubelet[2659]: E1108 00:24:20.836832 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.836982 kubelet[2659]: W1108 00:24:20.836914 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.836982 kubelet[2659]: E1108 00:24:20.836930 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.837314 kubelet[2659]: E1108 00:24:20.837296 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.837550 kubelet[2659]: W1108 00:24:20.837382 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.837550 kubelet[2659]: E1108 00:24:20.837401 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:24:20.837741 kubelet[2659]: E1108 00:24:20.837727 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.837869 kubelet[2659]: W1108 00:24:20.837804 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.837869 kubelet[2659]: E1108 00:24:20.837822 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.838322 kubelet[2659]: E1108 00:24:20.838307 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.838467 kubelet[2659]: W1108 00:24:20.838406 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.838551 kubelet[2659]: E1108 00:24:20.838517 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.839173 kubelet[2659]: E1108 00:24:20.839137 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.839451 kubelet[2659]: W1108 00:24:20.839377 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.839616 kubelet[2659]: E1108 00:24:20.839543 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.840249 kubelet[2659]: E1108 00:24:20.840201 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.840249 kubelet[2659]: W1108 00:24:20.840216 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.840249 kubelet[2659]: E1108 00:24:20.840228 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.840871 kubelet[2659]: E1108 00:24:20.840853 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.840947 kubelet[2659]: W1108 00:24:20.840933 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.841048 kubelet[2659]: E1108 00:24:20.841016 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:24:20.841161 kubelet[2659]: I1108 00:24:20.841138 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9805d816-e7c8-479d-9360-d3b3efa64586-registration-dir\") pod \"csi-node-driver-5mbvz\" (UID: \"9805d816-e7c8-479d-9360-d3b3efa64586\") " pod="calico-system/csi-node-driver-5mbvz" Nov 8 00:24:20.841569 kubelet[2659]: E1108 00:24:20.841550 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.841668 kubelet[2659]: W1108 00:24:20.841649 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.841745 kubelet[2659]: E1108 00:24:20.841730 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.842108 kubelet[2659]: E1108 00:24:20.842090 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.842192 kubelet[2659]: W1108 00:24:20.842177 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.842292 kubelet[2659]: E1108 00:24:20.842273 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.842569 kubelet[2659]: I1108 00:24:20.842548 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9805d816-e7c8-479d-9360-d3b3efa64586-kubelet-dir\") pod \"csi-node-driver-5mbvz\" (UID: \"9805d816-e7c8-479d-9360-d3b3efa64586\") " pod="calico-system/csi-node-driver-5mbvz" Nov 8 00:24:20.842714 kubelet[2659]: E1108 00:24:20.842698 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.842786 kubelet[2659]: W1108 00:24:20.842771 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.842893 kubelet[2659]: E1108 00:24:20.842875 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.843313 kubelet[2659]: E1108 00:24:20.843298 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.843387 kubelet[2659]: W1108 00:24:20.843373 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.843508 kubelet[2659]: E1108 00:24:20.843494 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:24:20.843848 kubelet[2659]: E1108 00:24:20.843833 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.843931 kubelet[2659]: W1108 00:24:20.843916 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.844018 kubelet[2659]: E1108 00:24:20.844003 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.844220 kubelet[2659]: I1108 00:24:20.844203 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9805d816-e7c8-479d-9360-d3b3efa64586-socket-dir\") pod \"csi-node-driver-5mbvz\" (UID: \"9805d816-e7c8-479d-9360-d3b3efa64586\") " pod="calico-system/csi-node-driver-5mbvz" Nov 8 00:24:20.845015 kubelet[2659]: E1108 00:24:20.845002 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.845086 kubelet[2659]: W1108 00:24:20.845075 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.845141 kubelet[2659]: E1108 00:24:20.845131 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.845537 kubelet[2659]: E1108 00:24:20.845522 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.846776 kubelet[2659]: W1108 00:24:20.846558 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.846776 kubelet[2659]: E1108 00:24:20.846586 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.846776 kubelet[2659]: I1108 00:24:20.846602 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjcwq\" (UniqueName: \"kubernetes.io/projected/9805d816-e7c8-479d-9360-d3b3efa64586-kube-api-access-pjcwq\") pod \"csi-node-driver-5mbvz\" (UID: \"9805d816-e7c8-479d-9360-d3b3efa64586\") " pod="calico-system/csi-node-driver-5mbvz" Nov 8 00:24:20.847142 kubelet[2659]: E1108 00:24:20.847106 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.847142 kubelet[2659]: W1108 00:24:20.847140 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.847348 kubelet[2659]: E1108 00:24:20.847319 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:24:20.847412 kubelet[2659]: I1108 00:24:20.847369 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9805d816-e7c8-479d-9360-d3b3efa64586-varrun\") pod \"csi-node-driver-5mbvz\" (UID: \"9805d816-e7c8-479d-9360-d3b3efa64586\") " pod="calico-system/csi-node-driver-5mbvz" Nov 8 00:24:20.847533 kubelet[2659]: E1108 00:24:20.847514 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.847533 kubelet[2659]: W1108 00:24:20.847530 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.847704 kubelet[2659]: E1108 00:24:20.847680 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.848023 kubelet[2659]: E1108 00:24:20.847791 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.848023 kubelet[2659]: W1108 00:24:20.847815 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.848023 kubelet[2659]: E1108 00:24:20.847828 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.848110 kubelet[2659]: E1108 00:24:20.848046 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.848110 kubelet[2659]: W1108 00:24:20.848056 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.848110 kubelet[2659]: E1108 00:24:20.848071 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.848361 kubelet[2659]: E1108 00:24:20.848331 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.848361 kubelet[2659]: W1108 00:24:20.848351 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.848361 kubelet[2659]: E1108 00:24:20.848363 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:24:20.848660 kubelet[2659]: E1108 00:24:20.848636 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.848660 kubelet[2659]: W1108 00:24:20.848653 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.848660 kubelet[2659]: E1108 00:24:20.848662 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.848909 kubelet[2659]: E1108 00:24:20.848887 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.848909 kubelet[2659]: W1108 00:24:20.848905 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.848962 kubelet[2659]: E1108 00:24:20.848915 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.887523 containerd[1574]: time="2025-11-08T00:24:20.887336944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zs9pp,Uid:2a7bd6b0-4560-4f5a-98e7-7c6fcdde0c07,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ec48ee0ef2c5a2c64ecb3db918a179261d2ef2bdadcba53dfe6d1fed5f0e5c7\"" Nov 8 00:24:20.889085 kubelet[2659]: E1108 00:24:20.889031 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:20.948711 kubelet[2659]: E1108 00:24:20.948664 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.948711 kubelet[2659]: W1108 00:24:20.948692 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.948711 kubelet[2659]: E1108 00:24:20.948716 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.949106 kubelet[2659]: E1108 00:24:20.949082 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.949106 kubelet[2659]: W1108 00:24:20.949104 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.949207 kubelet[2659]: E1108 00:24:20.949126 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:24:20.949440 kubelet[2659]: E1108 00:24:20.949392 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.949440 kubelet[2659]: W1108 00:24:20.949409 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.949440 kubelet[2659]: E1108 00:24:20.949451 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.950151 kubelet[2659]: E1108 00:24:20.949872 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.950151 kubelet[2659]: W1108 00:24:20.949905 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.950151 kubelet[2659]: E1108 00:24:20.949941 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.950411 kubelet[2659]: E1108 00:24:20.950377 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.950482 kubelet[2659]: W1108 00:24:20.950411 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.950544 kubelet[2659]: E1108 00:24:20.950512 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.950957 kubelet[2659]: E1108 00:24:20.950923 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.950957 kubelet[2659]: W1108 00:24:20.950939 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.951279 kubelet[2659]: E1108 00:24:20.951259 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.951279 kubelet[2659]: E1108 00:24:20.951265 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.951356 kubelet[2659]: W1108 00:24:20.951275 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.951409 kubelet[2659]: E1108 00:24:20.951367 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:24:20.951597 kubelet[2659]: E1108 00:24:20.951578 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.951597 kubelet[2659]: W1108 00:24:20.951595 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.951666 kubelet[2659]: E1108 00:24:20.951635 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.951903 kubelet[2659]: E1108 00:24:20.951874 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.951903 kubelet[2659]: W1108 00:24:20.951890 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.951981 kubelet[2659]: E1108 00:24:20.951922 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.952148 kubelet[2659]: E1108 00:24:20.952129 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.952148 kubelet[2659]: W1108 00:24:20.952142 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.952224 kubelet[2659]: E1108 00:24:20.952172 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.952387 kubelet[2659]: E1108 00:24:20.952367 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.952387 kubelet[2659]: W1108 00:24:20.952380 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.952485 kubelet[2659]: E1108 00:24:20.952397 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.952682 kubelet[2659]: E1108 00:24:20.952662 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.952682 kubelet[2659]: W1108 00:24:20.952675 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.952769 kubelet[2659]: E1108 00:24:20.952707 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:24:20.952950 kubelet[2659]: E1108 00:24:20.952924 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.952950 kubelet[2659]: W1108 00:24:20.952940 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.953058 kubelet[2659]: E1108 00:24:20.953011 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.953228 kubelet[2659]: E1108 00:24:20.953199 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.953228 kubelet[2659]: W1108 00:24:20.953216 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.953288 kubelet[2659]: E1108 00:24:20.953251 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.953510 kubelet[2659]: E1108 00:24:20.953493 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.953510 kubelet[2659]: W1108 00:24:20.953506 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.953601 kubelet[2659]: E1108 00:24:20.953543 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.953820 kubelet[2659]: E1108 00:24:20.953795 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.953880 kubelet[2659]: W1108 00:24:20.953836 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.953880 kubelet[2659]: E1108 00:24:20.953873 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.954196 kubelet[2659]: E1108 00:24:20.954175 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.954196 kubelet[2659]: W1108 00:24:20.954189 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.954279 kubelet[2659]: E1108 00:24:20.954257 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:24:20.954528 kubelet[2659]: E1108 00:24:20.954499 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.954528 kubelet[2659]: W1108 00:24:20.954513 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.954609 kubelet[2659]: E1108 00:24:20.954546 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.954785 kubelet[2659]: E1108 00:24:20.954760 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.954842 kubelet[2659]: W1108 00:24:20.954775 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.954930 kubelet[2659]: E1108 00:24:20.954904 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.955099 kubelet[2659]: E1108 00:24:20.955082 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.955099 kubelet[2659]: W1108 00:24:20.955096 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.955187 kubelet[2659]: E1108 00:24:20.955154 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.955630 kubelet[2659]: E1108 00:24:20.955611 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.955630 kubelet[2659]: W1108 00:24:20.955626 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.955810 kubelet[2659]: E1108 00:24:20.955669 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:20.956072 kubelet[2659]: E1108 00:24:20.956052 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.956072 kubelet[2659]: W1108 00:24:20.956068 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.956156 kubelet[2659]: E1108 00:24:20.956111 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Nov 8 00:24:20.965852 kubelet[2659]: E1108 00:24:20.965820 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:20.965852 kubelet[2659]: W1108 00:24:20.965835 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:20.965852 kubelet[2659]: E1108 00:24:20.965846 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:24:22.720373 kubelet[2659]: E1108 00:24:22.720325 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mbvz" podUID="9805d816-e7c8-479d-9360-d3b3efa64586" Nov 8 00:24:22.787662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3200119979.mount: Deactivated successfully.
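The pod_workers.go message keeps recurring for csi-node-driver-5mbvz because the container runtime reports NetworkReady=false until a CNI config is installed. A hedged sketch of the same status query the kubelet performs over CRI, assuming containerd's default socket path on Flatcar:

```go
// Query the CRI runtime status to see the NetworkReady=false condition
// logged above. The socket path is an assumption (containerd's default);
// this is a diagnostic sketch, not kubelet code.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Status(ctx, &runtimeapi.StatusRequest{})
	if err != nil {
		log.Fatal(err)
	}
	// Until the Calico CNI config lands, the NetworkReady condition carries
	// reason=NetworkPluginNotReady, which is what pod_workers.go reports.
	for _, c := range resp.Status.Conditions {
		fmt.Printf("%s=%v reason=%s message=%s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}
```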
Nov 8 00:24:23.096004 containerd[1574]: time="2025-11-08T00:24:23.095962460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:23.096766 containerd[1574]: time="2025-11-08T00:24:23.096703980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 8 00:24:23.097805 containerd[1574]: time="2025-11-08T00:24:23.097773950Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:23.099722 containerd[1574]: time="2025-11-08T00:24:23.099693400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:23.100313 containerd[1574]: time="2025-11-08T00:24:23.100281549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.388747952s" Nov 8 00:24:23.100348 containerd[1574]: time="2025-11-08T00:24:23.100314147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 00:24:23.104008 containerd[1574]: time="2025-11-08T00:24:23.103981144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:24:23.113251 containerd[1574]: time="2025-11-08T00:24:23.113062125Z" level=info msg="CreateContainer within sandbox \"2dea2492f4c742c80211c3dc0d5762d1bc3e58f06ac8265fbf64bd5417f0244c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:24:23.127362 containerd[1574]: time="2025-11-08T00:24:23.127323837Z" level=info msg="CreateContainer within sandbox \"2dea2492f4c742c80211c3dc0d5762d1bc3e58f06ac8265fbf64bd5417f0244c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b17dce5cf5015a5beaaa8396ad21051e950c26e5fcff1ea3f053081f5344890b\"" Nov 8 00:24:23.127789 containerd[1574]: time="2025-11-08T00:24:23.127760749Z" level=info msg="StartContainer for \"b17dce5cf5015a5beaaa8396ad21051e950c26e5fcff1ea3f053081f5344890b\"" Nov 8 00:24:23.208638 containerd[1574]: time="2025-11-08T00:24:23.208158254Z" level=info msg="StartContainer for \"b17dce5cf5015a5beaaa8396ad21051e950c26e5fcff1ea3f053081f5344890b\" returns successfully" Nov 8 00:24:23.800994 kubelet[2659]: E1108 00:24:23.800937 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:23.825263 kubelet[2659]: I1108 00:24:23.825137 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7bbffc5489-nx5b7" podStartSLOduration=1.432553797 podStartE2EDuration="3.825116801s" podCreationTimestamp="2025-11-08 00:24:20 +0000 UTC" firstStartedPulling="2025-11-08 00:24:20.711236599 +0000 UTC m=+20.090356593" lastFinishedPulling="2025-11-08 00:24:23.103799603 +0000 UTC m=+22.482919597" observedRunningTime="2025-11-08 00:24:23.824779055 +0000 UTC m=+23.203899059" watchObservedRunningTime="2025-11-08 00:24:23.825116801 +0000 UTC m=+23.204236795"
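The "Nameserver limits exceeded" messages recurring above and below reflect the kubelet capping a resolv.conf at three nameservers (the glibc resolver limit); the applied line "1.1.1.1 1.0.0.1 8.8.8.8" is simply the first three from the host. A standalone sketch of that truncation rule, for illustration only (not kubelet code):

```go
// Illustrative sketch of the nameserver cap behind dns.go:153
// "Nameserver limits exceeded": keep the first three nameservers,
// drop and report the rest.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; the limit the kubelet enforces

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Matches the log: extras are omitted, the applied line keeps the
		// first three, e.g. "1.1.1.1 1.0.0.1 8.8.8.8".
		fmt.Printf("nameserver limits exceeded, applying: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Printf("applying: %s\n", strings.Join(servers, " "))
}
```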
Nov 8 00:24:23.857291 kubelet[2659]: E1108 00:24:23.857254 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:23.857291 kubelet[2659]: W1108 00:24:23.857283 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:23.857489 kubelet[2659]: E1108 00:24:23.857323 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three-message FlexVolume probe sequence repeats verbatim, with fresh timestamps, roughly thirty more times through 00:24:23.876; the final occurrence follows]
Nov 8 00:24:23.876911 kubelet[2659]: E1108 00:24:23.876882 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:24:23.876911 kubelet[2659]: W1108 00:24:23.876897 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:24:23.876911 kubelet[2659]: E1108 00:24:23.876908 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
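The containerd[1574] messages that follow trace the same PullImage → CreateContainer → StartContainer sequence already seen for calico/typha, this time for pod2daemon-flexvol. In production the kubelet drives containerd over CRI; the sketch below instead uses containerd's native Go client (v1 API) purely to make that lifecycle concrete, with the image reference taken from the log and the container id invented for illustration:

```go
// Hedged sketch of the pull/create/start lifecycle the log traces,
// using containerd's v1 Go client directly (not how the kubelet does it).
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed resources live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// PullImage: emits the ImageCreate / "stop pulling image" events above.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: returns a container id, as in the log.
	container, err := client.NewContainer(ctx, "flexvol-driver-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("flexvol-driver-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: create the task (the shim in the log) and start it.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("started:", task.ID())
}
```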
Nov 8 00:24:24.472992 containerd[1574]: time="2025-11-08T00:24:24.472910908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:24.473556 containerd[1574]: time="2025-11-08T00:24:24.473491216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 00:24:24.474741 containerd[1574]: time="2025-11-08T00:24:24.474704328Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:24.477124 containerd[1574]: time="2025-11-08T00:24:24.477090740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:24.477904 containerd[1574]: time="2025-11-08T00:24:24.477856077Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.37383979s" Nov 8 00:24:24.477904 containerd[1574]: time="2025-11-08T00:24:24.477893653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:24:24.481549 containerd[1574]: time="2025-11-08T00:24:24.481514604Z" level=info msg="CreateContainer within sandbox \"4ec48ee0ef2c5a2c64ecb3db918a179261d2ef2bdadcba53dfe6d1fed5f0e5c7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:24:24.498714 containerd[1574]: time="2025-11-08T00:24:24.498672849Z" level=info msg="CreateContainer within sandbox \"4ec48ee0ef2c5a2c64ecb3db918a179261d2ef2bdadcba53dfe6d1fed5f0e5c7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"44d96b474f194085baa0ad2c88ee224e16a12a43a37f2980fb493973aaa646c5\"" Nov 8 00:24:24.500457 containerd[1574]: time="2025-11-08T00:24:24.500207270Z" level=info msg="StartContainer for \"44d96b474f194085baa0ad2c88ee224e16a12a43a37f2980fb493973aaa646c5\"" Nov 8 00:24:24.569652 containerd[1574]: time="2025-11-08T00:24:24.569604006Z" level=info msg="StartContainer for \"44d96b474f194085baa0ad2c88ee224e16a12a43a37f2980fb493973aaa646c5\" returns successfully" Nov 8 00:24:24.650624 containerd[1574]: time="2025-11-08T00:24:24.650534863Z" level=info msg="shim disconnected" id=44d96b474f194085baa0ad2c88ee224e16a12a43a37f2980fb493973aaa646c5 namespace=k8s.io Nov 8 00:24:24.650624 containerd[1574]: time="2025-11-08T00:24:24.650598105Z" level=warning msg="cleaning up after shim disconnected" id=44d96b474f194085baa0ad2c88ee224e16a12a43a37f2980fb493973aaa646c5 namespace=k8s.io Nov 8 00:24:24.650624 containerd[1574]: time="2025-11-08T00:24:24.650607612Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:24:24.724743 kubelet[2659]: E1108 00:24:24.724564 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mbvz" podUID="9805d816-e7c8-479d-9360-d3b3efa64586" Nov 8 00:24:24.767635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44d96b474f194085baa0ad2c88ee224e16a12a43a37f2980fb493973aaa646c5-rootfs.mount: Deactivated successfully. Nov 8 00:24:24.790415 kubelet[2659]: I1108 00:24:24.790379 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:24:24.790825 kubelet[2659]: E1108 00:24:24.790797 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:24.790912 kubelet[2659]: E1108 00:24:24.790882 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:24.791619 containerd[1574]: time="2025-11-08T00:24:24.791583061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:24:26.730909 kubelet[2659]: E1108 00:24:26.730833 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mbvz" podUID="9805d816-e7c8-479d-9360-d3b3efa64586" Nov 8 00:24:27.474123 containerd[1574]: time="2025-11-08T00:24:27.474064627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:27.474772 containerd[1574]: time="2025-11-08T00:24:27.474739877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:24:27.475959 containerd[1574]: time="2025-11-08T00:24:27.475915723Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:27.478353 containerd[1574]: time="2025-11-08T00:24:27.478316724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:27.479101 containerd[1574]: time="2025-11-08T00:24:27.479071075Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.687443274s" Nov 8 00:24:27.479101 containerd[1574]: time="2025-11-08T00:24:27.479101250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:24:27.483602 containerd[1574]: time="2025-11-08T00:24:27.483086137Z" level=info msg="CreateContainer within sandbox \"4ec48ee0ef2c5a2c64ecb3db918a179261d2ef2bdadcba53dfe6d1fed5f0e5c7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:24:27.502152 containerd[1574]: time="2025-11-08T00:24:27.502098792Z" level=info msg="CreateContainer within sandbox \"4ec48ee0ef2c5a2c64ecb3db918a179261d2ef2bdadcba53dfe6d1fed5f0e5c7\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"263058e82e15dc7d4e6f9ef47c56b70c229a62e0857799ecffd62e019648e7e7\"" Nov 8 00:24:27.502679 containerd[1574]: time="2025-11-08T00:24:27.502597054Z" level=info msg="StartContainer for \"263058e82e15dc7d4e6f9ef47c56b70c229a62e0857799ecffd62e019648e7e7\"" Nov 8 00:24:27.572796 containerd[1574]: time="2025-11-08T00:24:27.572751665Z" level=info msg="StartContainer for \"263058e82e15dc7d4e6f9ef47c56b70c229a62e0857799ecffd62e019648e7e7\" returns successfully" Nov 8 00:24:27.797353 kubelet[2659]: E1108 00:24:27.797207 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:28.720109 kubelet[2659]: E1108 00:24:28.720055 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mbvz" podUID="9805d816-e7c8-479d-9360-d3b3efa64586" Nov 8 00:24:28.798855 kubelet[2659]: E1108 00:24:28.798794 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:28.976271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-263058e82e15dc7d4e6f9ef47c56b70c229a62e0857799ecffd62e019648e7e7-rootfs.mount: Deactivated successfully. Nov 8 00:24:28.979662 containerd[1574]: time="2025-11-08T00:24:28.979590935Z" level=info msg="shim disconnected" id=263058e82e15dc7d4e6f9ef47c56b70c229a62e0857799ecffd62e019648e7e7 namespace=k8s.io Nov 8 00:24:28.980089 containerd[1574]: time="2025-11-08T00:24:28.979663946Z" level=warning msg="cleaning up after shim disconnected" id=263058e82e15dc7d4e6f9ef47c56b70c229a62e0857799ecffd62e019648e7e7 namespace=k8s.io Nov 8 00:24:28.980089 containerd[1574]: time="2025-11-08T00:24:28.979676108Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:24:29.020320 kubelet[2659]: I1108 00:24:29.020284 2659 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:24:29.107194 kubelet[2659]: I1108 00:24:29.107132 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgv9w\" (UniqueName: \"kubernetes.io/projected/10e98c86-5971-470b-a7fe-1df841b99600-kube-api-access-xgv9w\") pod \"coredns-668d6bf9bc-rzrnf\" (UID: \"10e98c86-5971-470b-a7fe-1df841b99600\") " pod="kube-system/coredns-668d6bf9bc-rzrnf" Nov 8 00:24:29.107194 kubelet[2659]: I1108 00:24:29.107185 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eeea95e4-9382-44a4-8d49-f7b93764175d-whisker-ca-bundle\") pod \"whisker-886f776dc-cd7z8\" (UID: \"eeea95e4-9382-44a4-8d49-f7b93764175d\") " pod="calico-system/whisker-886f776dc-cd7z8" Nov 8 00:24:29.107194 kubelet[2659]: I1108 00:24:29.107210 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvmp2\" (UniqueName: \"kubernetes.io/projected/e4d2644c-0b49-478b-8030-eea32781a579-kube-api-access-nvmp2\") pod \"goldmane-666569f655-z8t5h\" (UID: \"e4d2644c-0b49-478b-8030-eea32781a579\") " pod="calico-system/goldmane-666569f655-z8t5h" Nov 8 00:24:29.107531 kubelet[2659]: I1108 
00:24:29.107230 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d2644c-0b49-478b-8030-eea32781a579-config\") pod \"goldmane-666569f655-z8t5h\" (UID: \"e4d2644c-0b49-478b-8030-eea32781a579\") " pod="calico-system/goldmane-666569f655-z8t5h" Nov 8 00:24:29.107531 kubelet[2659]: I1108 00:24:29.107254 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10e98c86-5971-470b-a7fe-1df841b99600-config-volume\") pod \"coredns-668d6bf9bc-rzrnf\" (UID: \"10e98c86-5971-470b-a7fe-1df841b99600\") " pod="kube-system/coredns-668d6bf9bc-rzrnf" Nov 8 00:24:29.107531 kubelet[2659]: I1108 00:24:29.107272 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d2644c-0b49-478b-8030-eea32781a579-goldmane-ca-bundle\") pod \"goldmane-666569f655-z8t5h\" (UID: \"e4d2644c-0b49-478b-8030-eea32781a579\") " pod="calico-system/goldmane-666569f655-z8t5h" Nov 8 00:24:29.107531 kubelet[2659]: I1108 00:24:29.107299 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eeea95e4-9382-44a4-8d49-f7b93764175d-whisker-backend-key-pair\") pod \"whisker-886f776dc-cd7z8\" (UID: \"eeea95e4-9382-44a4-8d49-f7b93764175d\") " pod="calico-system/whisker-886f776dc-cd7z8" Nov 8 00:24:29.107531 kubelet[2659]: I1108 00:24:29.107319 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9-config-volume\") pod \"coredns-668d6bf9bc-kckcx\" (UID: \"21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9\") " pod="kube-system/coredns-668d6bf9bc-kckcx" Nov 8 00:24:29.107790 kubelet[2659]: I1108 00:24:29.107353 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz982\" (UniqueName: \"kubernetes.io/projected/9d10b4e8-1a4b-40ad-b663-a53c60424a45-kube-api-access-lz982\") pod \"calico-apiserver-54f8bbf6f-npzst\" (UID: \"9d10b4e8-1a4b-40ad-b663-a53c60424a45\") " pod="calico-apiserver/calico-apiserver-54f8bbf6f-npzst" Nov 8 00:24:29.107790 kubelet[2659]: I1108 00:24:29.107377 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmh8v\" (UniqueName: \"kubernetes.io/projected/a83aa0bc-f007-4d1f-95cf-997e1c8ab851-kube-api-access-jmh8v\") pod \"calico-apiserver-54f8bbf6f-szk2c\" (UID: \"a83aa0bc-f007-4d1f-95cf-997e1c8ab851\") " pod="calico-apiserver/calico-apiserver-54f8bbf6f-szk2c" Nov 8 00:24:29.107790 kubelet[2659]: I1108 00:24:29.107400 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7df28\" (UniqueName: \"kubernetes.io/projected/258307e3-fc8b-44da-8b83-06fe3d2024fa-kube-api-access-7df28\") pod \"calico-apiserver-65fd777b6d-qk5xd\" (UID: \"258307e3-fc8b-44da-8b83-06fe3d2024fa\") " pod="calico-apiserver/calico-apiserver-65fd777b6d-qk5xd" Nov 8 00:24:29.107790 kubelet[2659]: I1108 00:24:29.107439 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a83aa0bc-f007-4d1f-95cf-997e1c8ab851-calico-apiserver-certs\") pod 
\"calico-apiserver-54f8bbf6f-szk2c\" (UID: \"a83aa0bc-f007-4d1f-95cf-997e1c8ab851\") " pod="calico-apiserver/calico-apiserver-54f8bbf6f-szk2c" Nov 8 00:24:29.107790 kubelet[2659]: I1108 00:24:29.107469 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/060a07da-b44f-4d4c-ae28-2a94dae48d16-tigera-ca-bundle\") pod \"calico-kube-controllers-999d4cc44-xrwzd\" (UID: \"060a07da-b44f-4d4c-ae28-2a94dae48d16\") " pod="calico-system/calico-kube-controllers-999d4cc44-xrwzd" Nov 8 00:24:29.108045 kubelet[2659]: I1108 00:24:29.107491 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndblm\" (UniqueName: \"kubernetes.io/projected/060a07da-b44f-4d4c-ae28-2a94dae48d16-kube-api-access-ndblm\") pod \"calico-kube-controllers-999d4cc44-xrwzd\" (UID: \"060a07da-b44f-4d4c-ae28-2a94dae48d16\") " pod="calico-system/calico-kube-controllers-999d4cc44-xrwzd" Nov 8 00:24:29.108045 kubelet[2659]: I1108 00:24:29.107513 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6pd5\" (UniqueName: \"kubernetes.io/projected/21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9-kube-api-access-p6pd5\") pod \"coredns-668d6bf9bc-kckcx\" (UID: \"21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9\") " pod="kube-system/coredns-668d6bf9bc-kckcx" Nov 8 00:24:29.108045 kubelet[2659]: I1108 00:24:29.107533 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxn69\" (UniqueName: \"kubernetes.io/projected/eeea95e4-9382-44a4-8d49-f7b93764175d-kube-api-access-xxn69\") pod \"whisker-886f776dc-cd7z8\" (UID: \"eeea95e4-9382-44a4-8d49-f7b93764175d\") " pod="calico-system/whisker-886f776dc-cd7z8" Nov 8 00:24:29.108045 kubelet[2659]: I1108 00:24:29.107556 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e4d2644c-0b49-478b-8030-eea32781a579-goldmane-key-pair\") pod \"goldmane-666569f655-z8t5h\" (UID: \"e4d2644c-0b49-478b-8030-eea32781a579\") " pod="calico-system/goldmane-666569f655-z8t5h" Nov 8 00:24:29.108045 kubelet[2659]: I1108 00:24:29.107580 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9d10b4e8-1a4b-40ad-b663-a53c60424a45-calico-apiserver-certs\") pod \"calico-apiserver-54f8bbf6f-npzst\" (UID: \"9d10b4e8-1a4b-40ad-b663-a53c60424a45\") " pod="calico-apiserver/calico-apiserver-54f8bbf6f-npzst" Nov 8 00:24:29.109049 kubelet[2659]: I1108 00:24:29.107605 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/258307e3-fc8b-44da-8b83-06fe3d2024fa-calico-apiserver-certs\") pod \"calico-apiserver-65fd777b6d-qk5xd\" (UID: \"258307e3-fc8b-44da-8b83-06fe3d2024fa\") " pod="calico-apiserver/calico-apiserver-65fd777b6d-qk5xd" Nov 8 00:24:29.353412 containerd[1574]: time="2025-11-08T00:24:29.353363236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-886f776dc-cd7z8,Uid:eeea95e4-9382-44a4-8d49-f7b93764175d,Namespace:calico-system,Attempt:0,}" Nov 8 00:24:29.370946 kubelet[2659]: E1108 00:24:29.370894 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:29.371381 containerd[1574]: time="2025-11-08T00:24:29.371335901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fd777b6d-qk5xd,Uid:258307e3-fc8b-44da-8b83-06fe3d2024fa,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:24:29.371896 containerd[1574]: time="2025-11-08T00:24:29.371809685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kckcx,Uid:21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9,Namespace:kube-system,Attempt:0,}" Nov 8 00:24:29.375750 containerd[1574]: time="2025-11-08T00:24:29.375704041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f8bbf6f-szk2c,Uid:a83aa0bc-f007-4d1f-95cf-997e1c8ab851,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:24:29.378867 containerd[1574]: time="2025-11-08T00:24:29.378825363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z8t5h,Uid:e4d2644c-0b49-478b-8030-eea32781a579,Namespace:calico-system,Attempt:0,}" Nov 8 00:24:29.382315 kubelet[2659]: E1108 00:24:29.382272 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:29.385335 containerd[1574]: time="2025-11-08T00:24:29.385290108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-999d4cc44-xrwzd,Uid:060a07da-b44f-4d4c-ae28-2a94dae48d16,Namespace:calico-system,Attempt:0,}" Nov 8 00:24:29.386741 containerd[1574]: time="2025-11-08T00:24:29.386711449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f8bbf6f-npzst,Uid:9d10b4e8-1a4b-40ad-b663-a53c60424a45,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:24:29.387252 containerd[1574]: time="2025-11-08T00:24:29.387220746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rzrnf,Uid:10e98c86-5971-470b-a7fe-1df841b99600,Namespace:kube-system,Attempt:0,}" Nov 8 00:24:29.526452 containerd[1574]: time="2025-11-08T00:24:29.525051533Z" level=error msg="Failed to destroy network for sandbox \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.535258 containerd[1574]: time="2025-11-08T00:24:29.535196648Z" level=error msg="encountered an error cleaning up failed sandbox \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.535418 containerd[1574]: time="2025-11-08T00:24:29.535292391Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-886f776dc-cd7z8,Uid:eeea95e4-9382-44a4-8d49-f7b93764175d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
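The sandbox failures above and below all trip on the same check: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running, and the file does not exist yet, so every RunPodSandbox attempt fails until calico-node comes up. A small sketch of that readiness check, under the assumption stated in the error message itself:

```go
// Sketch of the readiness check failing in the log: /var/lib/calico/nodename
// is written by the calico/node container on startup and read by the CNI
// plugin to learn the node name. Until it exists, sandbox setup fails with
// "stat /var/lib/calico/nodename: no such file or directory".
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Reproduces the failure mode reported by the plugin above.
		fmt.Fprintf(os.Stderr, "calico not ready: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("calico node name: %s\n", data)
}
```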
Nov 8 00:24:29.547703 kubelet[2659]: E1108 00:24:29.547636 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.547871 kubelet[2659]: E1108 00:24:29.547778 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-886f776dc-cd7z8" Nov 8 00:24:29.547871 kubelet[2659]: E1108 00:24:29.547808 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-886f776dc-cd7z8" Nov 8 00:24:29.548248 kubelet[2659]: E1108 00:24:29.547891 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-886f776dc-cd7z8_calico-system(eeea95e4-9382-44a4-8d49-f7b93764175d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-886f776dc-cd7z8_calico-system(eeea95e4-9382-44a4-8d49-f7b93764175d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-886f776dc-cd7z8" podUID="eeea95e4-9382-44a4-8d49-f7b93764175d" Nov 8 00:24:29.581229 containerd[1574]: time="2025-11-08T00:24:29.581166347Z" level=error msg="Failed to destroy network for sandbox \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.581649 containerd[1574]: time="2025-11-08T00:24:29.581616058Z" level=error msg="encountered an error cleaning up failed sandbox \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.581699 containerd[1574]: time="2025-11-08T00:24:29.581674353Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f8bbf6f-szk2c,Uid:a83aa0bc-f007-4d1f-95cf-997e1c8ab851,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.581989 kubelet[2659]: E1108 00:24:29.581936 2659 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.582061 kubelet[2659]: E1108 00:24:29.582014 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f8bbf6f-szk2c" Nov 8 00:24:29.582061 kubelet[2659]: E1108 00:24:29.582039 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f8bbf6f-szk2c" Nov 8 00:24:29.582107 kubelet[2659]: E1108 00:24:29.582081 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54f8bbf6f-szk2c_calico-apiserver(a83aa0bc-f007-4d1f-95cf-997e1c8ab851)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54f8bbf6f-szk2c_calico-apiserver(a83aa0bc-f007-4d1f-95cf-997e1c8ab851)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54f8bbf6f-szk2c" podUID="a83aa0bc-f007-4d1f-95cf-997e1c8ab851" Nov 8 00:24:29.586927 containerd[1574]: time="2025-11-08T00:24:29.586701499Z" level=error msg="Failed to destroy network for sandbox \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.587663 containerd[1574]: time="2025-11-08T00:24:29.587533317Z" level=error msg="encountered an error cleaning up failed sandbox \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.587663 containerd[1574]: time="2025-11-08T00:24:29.587600478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kckcx,Uid:21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 
00:24:29.588155 kubelet[2659]: E1108 00:24:29.588074 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.588234 kubelet[2659]: E1108 00:24:29.588185 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kckcx" Nov 8 00:24:29.588275 kubelet[2659]: E1108 00:24:29.588243 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kckcx" Nov 8 00:24:29.588718 kubelet[2659]: E1108 00:24:29.588391 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-kckcx_kube-system(21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-kckcx_kube-system(21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kckcx" podUID="21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9" Nov 8 00:24:29.595022 containerd[1574]: time="2025-11-08T00:24:29.594988717Z" level=error msg="Failed to destroy network for sandbox \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.598685 containerd[1574]: time="2025-11-08T00:24:29.598658869Z" level=error msg="encountered an error cleaning up failed sandbox \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.598792 containerd[1574]: time="2025-11-08T00:24:29.598770680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fd777b6d-qk5xd,Uid:258307e3-fc8b-44da-8b83-06fe3d2024fa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Nov 8 00:24:29.599090 kubelet[2659]: E1108 00:24:29.599049 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.599153 kubelet[2659]: E1108 00:24:29.599102 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65fd777b6d-qk5xd" Nov 8 00:24:29.599153 kubelet[2659]: E1108 00:24:29.599124 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65fd777b6d-qk5xd" Nov 8 00:24:29.599206 kubelet[2659]: E1108 00:24:29.599158 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65fd777b6d-qk5xd_calico-apiserver(258307e3-fc8b-44da-8b83-06fe3d2024fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65fd777b6d-qk5xd_calico-apiserver(258307e3-fc8b-44da-8b83-06fe3d2024fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65fd777b6d-qk5xd" podUID="258307e3-fc8b-44da-8b83-06fe3d2024fa" Nov 8 00:24:29.608679 containerd[1574]: time="2025-11-08T00:24:29.608580582Z" level=error msg="Failed to destroy network for sandbox \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.610033 containerd[1574]: time="2025-11-08T00:24:29.609813943Z" level=error msg="Failed to destroy network for sandbox \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.610241 containerd[1574]: time="2025-11-08T00:24:29.610188267Z" level=error msg="encountered an error cleaning up failed sandbox \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Nov 8 00:24:29.610241 containerd[1574]: time="2025-11-08T00:24:29.610226266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f8bbf6f-npzst,Uid:9d10b4e8-1a4b-40ad-b663-a53c60424a45,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.610543 kubelet[2659]: E1108 00:24:29.610483 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.610629 kubelet[2659]: E1108 00:24:29.610564 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f8bbf6f-npzst" Nov 8 00:24:29.610629 kubelet[2659]: E1108 00:24:29.610587 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f8bbf6f-npzst" Nov 8 00:24:29.610693 kubelet[2659]: E1108 00:24:29.610638 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54f8bbf6f-npzst_calico-apiserver(9d10b4e8-1a4b-40ad-b663-a53c60424a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54f8bbf6f-npzst_calico-apiserver(9d10b4e8-1a4b-40ad-b663-a53c60424a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54f8bbf6f-npzst" podUID="9d10b4e8-1a4b-40ad-b663-a53c60424a45" Nov 8 00:24:29.612892 containerd[1574]: time="2025-11-08T00:24:29.612823614Z" level=error msg="Failed to destroy network for sandbox \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.613448 containerd[1574]: time="2025-11-08T00:24:29.613404371Z" level=error msg="encountered an error cleaning up failed sandbox \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.613570 containerd[1574]: time="2025-11-08T00:24:29.613546117Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z8t5h,Uid:e4d2644c-0b49-478b-8030-eea32781a579,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.613931 kubelet[2659]: E1108 00:24:29.613890 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.614084 kubelet[2659]: E1108 00:24:29.614061 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-z8t5h" Nov 8 00:24:29.614176 kubelet[2659]: E1108 00:24:29.614140 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-z8t5h" Nov 8 00:24:29.614343 kubelet[2659]: E1108 00:24:29.614207 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-z8t5h_calico-system(e4d2644c-0b49-478b-8030-eea32781a579)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-z8t5h_calico-system(e4d2644c-0b49-478b-8030-eea32781a579)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-z8t5h" podUID="e4d2644c-0b49-478b-8030-eea32781a579" Nov 8 00:24:29.616629 containerd[1574]: time="2025-11-08T00:24:29.616453333Z" level=error msg="Failed to destroy network for sandbox \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.617045 containerd[1574]: time="2025-11-08T00:24:29.616997654Z" level=error msg="encountered an error cleaning up failed sandbox \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.617163 containerd[1574]: time="2025-11-08T00:24:29.617054607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rzrnf,Uid:10e98c86-5971-470b-a7fe-1df841b99600,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.617293 kubelet[2659]: E1108 00:24:29.617255 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.617366 kubelet[2659]: E1108 00:24:29.617313 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rzrnf" Nov 8 00:24:29.617366 kubelet[2659]: E1108 00:24:29.617334 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rzrnf" Nov 8 00:24:29.617433 kubelet[2659]: E1108 00:24:29.617375 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rzrnf_kube-system(10e98c86-5971-470b-a7fe-1df841b99600)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rzrnf_kube-system(10e98c86-5971-470b-a7fe-1df841b99600)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rzrnf" podUID="10e98c86-5971-470b-a7fe-1df841b99600" Nov 8 00:24:29.666650 containerd[1574]: time="2025-11-08T00:24:29.666574120Z" level=error msg="encountered an error cleaning up failed sandbox \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.666650 containerd[1574]: time="2025-11-08T00:24:29.666660085Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-999d4cc44-xrwzd,Uid:060a07da-b44f-4d4c-ae28-2a94dae48d16,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.667003 kubelet[2659]: E1108 00:24:29.666942 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.667063 kubelet[2659]: E1108 00:24:29.667024 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-999d4cc44-xrwzd" Nov 8 00:24:29.667095 kubelet[2659]: E1108 00:24:29.667053 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-999d4cc44-xrwzd" Nov 8 00:24:29.667142 kubelet[2659]: E1108 00:24:29.667114 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-999d4cc44-xrwzd_calico-system(060a07da-b44f-4d4c-ae28-2a94dae48d16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-999d4cc44-xrwzd_calico-system(060a07da-b44f-4d4c-ae28-2a94dae48d16)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-999d4cc44-xrwzd" podUID="060a07da-b44f-4d4c-ae28-2a94dae48d16" Nov 8 00:24:29.804211 kubelet[2659]: E1108 00:24:29.804178 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:29.807095 containerd[1574]: time="2025-11-08T00:24:29.807060190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:24:29.807823 kubelet[2659]: I1108 00:24:29.807799 2659 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Nov 8 00:24:29.810249 kubelet[2659]: I1108 00:24:29.809684 2659 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Nov 8 00:24:29.811481 kubelet[2659]: I1108 00:24:29.811173 2659 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Nov 8 00:24:29.812585 kubelet[2659]: I1108 00:24:29.812568 2659 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Nov 8 00:24:29.823043 containerd[1574]: time="2025-11-08T00:24:29.822827142Z" level=info msg="StopPodSandbox for \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\"" Nov 8 00:24:29.823365 containerd[1574]: time="2025-11-08T00:24:29.823164069Z" level=info msg="StopPodSandbox for \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\"" Nov 8 00:24:29.824590 containerd[1574]: time="2025-11-08T00:24:29.824106477Z" level=info msg="StopPodSandbox for \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\"" Nov 8 00:24:29.825656 containerd[1574]: time="2025-11-08T00:24:29.825579951Z" level=info msg="Ensure that sandbox 4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6 in task-service has been cleanup successfully" Nov 8 00:24:29.825975 containerd[1574]: time="2025-11-08T00:24:29.825587525Z" level=info msg="Ensure that sandbox f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838 in task-service has been cleanup successfully" Nov 8 00:24:29.826545 containerd[1574]: time="2025-11-08T00:24:29.825608543Z" level=info msg="Ensure that sandbox 50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1 in task-service has been cleanup successfully" Nov 8 00:24:29.827531 containerd[1574]: time="2025-11-08T00:24:29.827242647Z" level=info msg="StopPodSandbox for \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\"" Nov 8 00:24:29.827906 containerd[1574]: time="2025-11-08T00:24:29.827877931Z" level=info msg="Ensure that sandbox a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b in task-service has been cleanup successfully" Nov 8 00:24:29.841485 kubelet[2659]: I1108 00:24:29.840007 2659 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Nov 8 00:24:29.841601 containerd[1574]: time="2025-11-08T00:24:29.840924272Z" level=info msg="StopPodSandbox for \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\"" Nov 8 00:24:29.841601 containerd[1574]: time="2025-11-08T00:24:29.841317661Z" level=info msg="Ensure that sandbox 39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea in task-service has been cleanup successfully" Nov 8 00:24:29.843030 kubelet[2659]: I1108 00:24:29.842709 2659 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Nov 8 00:24:29.844321 containerd[1574]: time="2025-11-08T00:24:29.844254501Z" level=info msg="StopPodSandbox for \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\"" Nov 8 00:24:29.845628 containerd[1574]: time="2025-11-08T00:24:29.845592271Z" level=info msg="Ensure that sandbox a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078 in task-service has been cleanup successfully" Nov 8 00:24:29.845859 kubelet[2659]: I1108 00:24:29.845829 2659 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Nov 8 00:24:29.846974 containerd[1574]: time="2025-11-08T00:24:29.846920925Z" level=info msg="StopPodSandbox for \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\"" Nov 8 00:24:29.851870 kubelet[2659]: I1108 00:24:29.851821 2659 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Nov 8 00:24:29.855025 containerd[1574]: time="2025-11-08T00:24:29.854531725Z" level=info msg="StopPodSandbox for \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\"" Nov 8 00:24:29.855025 containerd[1574]: time="2025-11-08T00:24:29.854690150Z" level=info msg="Ensure that sandbox a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e in task-service has been cleanup successfully" Nov 8 00:24:29.858972 containerd[1574]: time="2025-11-08T00:24:29.858850695Z" level=info msg="Ensure that sandbox 3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d in task-service has been cleanup successfully" Nov 8 00:24:29.906559 containerd[1574]: time="2025-11-08T00:24:29.906483360Z" level=error msg="StopPodSandbox for \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\" failed" error="failed to destroy network for sandbox \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.906794 kubelet[2659]: E1108 00:24:29.906757 2659 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Nov 8 00:24:29.906880 kubelet[2659]: E1108 00:24:29.906823 2659 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b"} Nov 8 00:24:29.906963 kubelet[2659]: E1108 00:24:29.906897 2659 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"258307e3-fc8b-44da-8b83-06fe3d2024fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:24:29.906963 kubelet[2659]: E1108 00:24:29.906924 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"258307e3-fc8b-44da-8b83-06fe3d2024fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65fd777b6d-qk5xd" 
podUID="258307e3-fc8b-44da-8b83-06fe3d2024fa" Nov 8 00:24:29.908448 containerd[1574]: time="2025-11-08T00:24:29.907980768Z" level=error msg="StopPodSandbox for \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\" failed" error="failed to destroy network for sandbox \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.908497 kubelet[2659]: E1108 00:24:29.908145 2659 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Nov 8 00:24:29.908497 kubelet[2659]: E1108 00:24:29.908197 2659 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea"} Nov 8 00:24:29.908497 kubelet[2659]: E1108 00:24:29.908221 2659 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d10b4e8-1a4b-40ad-b663-a53c60424a45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:24:29.908497 kubelet[2659]: E1108 00:24:29.908241 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d10b4e8-1a4b-40ad-b663-a53c60424a45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54f8bbf6f-npzst" podUID="9d10b4e8-1a4b-40ad-b663-a53c60424a45" Nov 8 00:24:29.921576 containerd[1574]: time="2025-11-08T00:24:29.921521259Z" level=error msg="StopPodSandbox for \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\" failed" error="failed to destroy network for sandbox \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.922094 kubelet[2659]: E1108 00:24:29.921955 2659 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Nov 8 00:24:29.922094 kubelet[2659]: E1108 
00:24:29.922008 2659 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078"} Nov 8 00:24:29.922094 kubelet[2659]: E1108 00:24:29.922041 2659 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:24:29.922094 kubelet[2659]: E1108 00:24:29.922065 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kckcx" podUID="21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9" Nov 8 00:24:29.925858 containerd[1574]: time="2025-11-08T00:24:29.925803954Z" level=error msg="StopPodSandbox for \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\" failed" error="failed to destroy network for sandbox \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.926170 kubelet[2659]: E1108 00:24:29.926030 2659 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Nov 8 00:24:29.926170 kubelet[2659]: E1108 00:24:29.926100 2659 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1"} Nov 8 00:24:29.926170 kubelet[2659]: E1108 00:24:29.926127 2659 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"060a07da-b44f-4d4c-ae28-2a94dae48d16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:24:29.926170 kubelet[2659]: E1108 00:24:29.926146 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"060a07da-b44f-4d4c-ae28-2a94dae48d16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-999d4cc44-xrwzd" podUID="060a07da-b44f-4d4c-ae28-2a94dae48d16" Nov 8 00:24:29.929110 containerd[1574]: time="2025-11-08T00:24:29.929077040Z" level=error msg="StopPodSandbox for \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\" failed" error="failed to destroy network for sandbox \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.929389 kubelet[2659]: E1108 00:24:29.929334 2659 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Nov 8 00:24:29.929551 kubelet[2659]: E1108 00:24:29.929510 2659 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838"} Nov 8 00:24:29.929629 kubelet[2659]: E1108 00:24:29.929579 2659 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"10e98c86-5971-470b-a7fe-1df841b99600\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:24:29.929629 kubelet[2659]: E1108 00:24:29.929600 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"10e98c86-5971-470b-a7fe-1df841b99600\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rzrnf" podUID="10e98c86-5971-470b-a7fe-1df841b99600" Nov 8 00:24:29.930314 containerd[1574]: time="2025-11-08T00:24:29.930252577Z" level=error msg="StopPodSandbox for \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\" failed" error="failed to destroy network for sandbox \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.930534 kubelet[2659]: E1108 00:24:29.930508 2659 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Nov 8 00:24:29.930581 kubelet[2659]: E1108 00:24:29.930540 2659 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e"} Nov 8 00:24:29.930615 kubelet[2659]: E1108 00:24:29.930580 2659 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eeea95e4-9382-44a4-8d49-f7b93764175d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:24:29.930666 kubelet[2659]: E1108 00:24:29.930607 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eeea95e4-9382-44a4-8d49-f7b93764175d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-886f776dc-cd7z8" podUID="eeea95e4-9382-44a4-8d49-f7b93764175d" Nov 8 00:24:29.938462 containerd[1574]: time="2025-11-08T00:24:29.938405003Z" level=error msg="StopPodSandbox for \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\" failed" error="failed to destroy network for sandbox \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.938660 kubelet[2659]: E1108 00:24:29.938625 2659 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Nov 8 00:24:29.938710 kubelet[2659]: E1108 00:24:29.938665 2659 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6"} Nov 8 00:24:29.938710 kubelet[2659]: E1108 00:24:29.938694 2659 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e4d2644c-0b49-478b-8030-eea32781a579\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:24:29.938769 kubelet[2659]: E1108 00:24:29.938718 2659 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"e4d2644c-0b49-478b-8030-eea32781a579\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-z8t5h" podUID="e4d2644c-0b49-478b-8030-eea32781a579" Nov 8 00:24:29.946635 containerd[1574]: time="2025-11-08T00:24:29.946573598Z" level=error msg="StopPodSandbox for \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\" failed" error="failed to destroy network for sandbox \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:29.946884 kubelet[2659]: E1108 00:24:29.946847 2659 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Nov 8 00:24:29.946933 kubelet[2659]: E1108 00:24:29.946896 2659 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d"} Nov 8 00:24:29.946964 kubelet[2659]: E1108 00:24:29.946928 2659 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a83aa0bc-f007-4d1f-95cf-997e1c8ab851\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:24:29.946964 kubelet[2659]: E1108 00:24:29.946955 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a83aa0bc-f007-4d1f-95cf-997e1c8ab851\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54f8bbf6f-szk2c" podUID="a83aa0bc-f007-4d1f-95cf-997e1c8ab851" Nov 8 00:24:30.728523 containerd[1574]: time="2025-11-08T00:24:30.728246046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5mbvz,Uid:9805d816-e7c8-479d-9360-d3b3efa64586,Namespace:calico-system,Attempt:0,}" Nov 8 00:24:30.836206 containerd[1574]: time="2025-11-08T00:24:30.836150433Z" level=error msg="Failed to destroy network for sandbox \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:30.837310 containerd[1574]: time="2025-11-08T00:24:30.837252773Z" level=error msg="encountered an error cleaning up failed sandbox \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:30.837565 containerd[1574]: time="2025-11-08T00:24:30.837524594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5mbvz,Uid:9805d816-e7c8-479d-9360-d3b3efa64586,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:30.837814 kubelet[2659]: E1108 00:24:30.837738 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:30.838337 kubelet[2659]: E1108 00:24:30.837841 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5mbvz" Nov 8 00:24:30.838337 kubelet[2659]: E1108 00:24:30.837867 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5mbvz" Nov 8 00:24:30.838337 kubelet[2659]: E1108 00:24:30.837923 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5mbvz_calico-system(9805d816-e7c8-479d-9360-d3b3efa64586)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5mbvz_calico-system(9805d816-e7c8-479d-9360-d3b3efa64586)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5mbvz" podUID="9805d816-e7c8-479d-9360-d3b3efa64586" Nov 8 00:24:30.843925 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85-shm.mount: Deactivated successfully. 
Nov 8 00:24:30.856085 kubelet[2659]: I1108 00:24:30.855983 2659 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Nov 8 00:24:30.856803 containerd[1574]: time="2025-11-08T00:24:30.856744966Z" level=info msg="StopPodSandbox for \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\"" Nov 8 00:24:30.857470 containerd[1574]: time="2025-11-08T00:24:30.857386015Z" level=info msg="Ensure that sandbox 55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85 in task-service has been cleanup successfully" Nov 8 00:24:30.897712 containerd[1574]: time="2025-11-08T00:24:30.897625698Z" level=error msg="StopPodSandbox for \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\" failed" error="failed to destroy network for sandbox \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:24:30.898127 kubelet[2659]: E1108 00:24:30.898011 2659 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Nov 8 00:24:30.899131 kubelet[2659]: E1108 00:24:30.899093 2659 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85"} Nov 8 00:24:30.899198 kubelet[2659]: E1108 00:24:30.899153 2659 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9805d816-e7c8-479d-9360-d3b3efa64586\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:24:30.899198 kubelet[2659]: E1108 00:24:30.899184 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9805d816-e7c8-479d-9360-d3b3efa64586\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5mbvz" podUID="9805d816-e7c8-479d-9360-d3b3efa64586" Nov 8 00:24:34.536175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2358157801.mount: Deactivated successfully. 
Nov 8 00:24:35.313084 containerd[1574]: time="2025-11-08T00:24:35.312986726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:35.314032 containerd[1574]: time="2025-11-08T00:24:35.313961556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:24:35.315438 containerd[1574]: time="2025-11-08T00:24:35.315379995Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:35.318158 containerd[1574]: time="2025-11-08T00:24:35.318117962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:35.318880 containerd[1574]: time="2025-11-08T00:24:35.318844457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.511736722s" Nov 8 00:24:35.318929 containerd[1574]: time="2025-11-08T00:24:35.318891884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:24:35.328468 containerd[1574]: time="2025-11-08T00:24:35.328389959Z" level=info msg="CreateContainer within sandbox \"4ec48ee0ef2c5a2c64ecb3db918a179261d2ef2bdadcba53dfe6d1fed5f0e5c7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:24:35.352969 containerd[1574]: time="2025-11-08T00:24:35.352908148Z" level=info msg="CreateContainer within sandbox \"4ec48ee0ef2c5a2c64ecb3db918a179261d2ef2bdadcba53dfe6d1fed5f0e5c7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b895d48f7afc9d5cf0b07b27fe747cff3f1a3f0130b40533f9eb8f71762d0601\"" Nov 8 00:24:35.354001 containerd[1574]: time="2025-11-08T00:24:35.353936947Z" level=info msg="StartContainer for \"b895d48f7afc9d5cf0b07b27fe747cff3f1a3f0130b40533f9eb8f71762d0601\"" Nov 8 00:24:35.502649 containerd[1574]: time="2025-11-08T00:24:35.502565163Z" level=info msg="StartContainer for \"b895d48f7afc9d5cf0b07b27fe747cff3f1a3f0130b40533f9eb8f71762d0601\" returns successfully" Nov 8 00:24:35.562743 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:24:35.562933 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Nov 8 00:24:35.666332 containerd[1574]: time="2025-11-08T00:24:35.666275076Z" level=info msg="StopPodSandbox for \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\"" Nov 8 00:24:35.879034 kubelet[2659]: E1108 00:24:35.878987 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:35.935090 containerd[1574]: 2025-11-08 00:24:35.814 [INFO][4073] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Nov 8 00:24:35.935090 containerd[1574]: 2025-11-08 00:24:35.814 [INFO][4073] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" iface="eth0" netns="/var/run/netns/cni-796e0904-aab7-5533-7e74-7648b13ddf32" Nov 8 00:24:35.935090 containerd[1574]: 2025-11-08 00:24:35.815 [INFO][4073] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" iface="eth0" netns="/var/run/netns/cni-796e0904-aab7-5533-7e74-7648b13ddf32" Nov 8 00:24:35.935090 containerd[1574]: 2025-11-08 00:24:35.816 [INFO][4073] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" iface="eth0" netns="/var/run/netns/cni-796e0904-aab7-5533-7e74-7648b13ddf32" Nov 8 00:24:35.935090 containerd[1574]: 2025-11-08 00:24:35.816 [INFO][4073] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Nov 8 00:24:35.935090 containerd[1574]: 2025-11-08 00:24:35.816 [INFO][4073] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Nov 8 00:24:35.935090 containerd[1574]: 2025-11-08 00:24:35.890 [INFO][4084] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" HandleID="k8s-pod-network.a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Workload="localhost-k8s-whisker--886f776dc--cd7z8-eth0" Nov 8 00:24:35.935090 containerd[1574]: 2025-11-08 00:24:35.891 [INFO][4084] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:24:35.935090 containerd[1574]: 2025-11-08 00:24:35.891 [INFO][4084] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:24:35.935090 containerd[1574]: 2025-11-08 00:24:35.920 [WARNING][4084] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" HandleID="k8s-pod-network.a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Workload="localhost-k8s-whisker--886f776dc--cd7z8-eth0" Nov 8 00:24:35.935090 containerd[1574]: 2025-11-08 00:24:35.920 [INFO][4084] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" HandleID="k8s-pod-network.a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Workload="localhost-k8s-whisker--886f776dc--cd7z8-eth0" Nov 8 00:24:35.935090 containerd[1574]: 2025-11-08 00:24:35.922 [INFO][4084] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:24:35.935090 containerd[1574]: 2025-11-08 00:24:35.930 [INFO][4073] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Nov 8 00:24:35.936750 containerd[1574]: time="2025-11-08T00:24:35.935390319Z" level=info msg="TearDown network for sandbox \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\" successfully" Nov 8 00:24:35.936750 containerd[1574]: time="2025-11-08T00:24:35.935507262Z" level=info msg="StopPodSandbox for \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\" returns successfully" Nov 8 00:24:35.936801 kubelet[2659]: I1108 00:24:35.935264 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zs9pp" podStartSLOduration=1.506560422 podStartE2EDuration="15.93522683s" podCreationTimestamp="2025-11-08 00:24:20 +0000 UTC" firstStartedPulling="2025-11-08 00:24:20.890991875 +0000 UTC m=+20.270111869" lastFinishedPulling="2025-11-08 00:24:35.319658283 +0000 UTC m=+34.698778277" observedRunningTime="2025-11-08 00:24:35.934613862 +0000 UTC m=+35.313733856" watchObservedRunningTime="2025-11-08 00:24:35.93522683 +0000 UTC m=+35.314346824" Nov 8 00:24:35.941615 systemd[1]: run-netns-cni\x2d796e0904\x2daab7\x2d5533\x2d7e74\x2d7648b13ddf32.mount: Deactivated successfully. Nov 8 00:24:35.954758 kubelet[2659]: I1108 00:24:35.954708 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eeea95e4-9382-44a4-8d49-f7b93764175d-whisker-backend-key-pair\") pod \"eeea95e4-9382-44a4-8d49-f7b93764175d\" (UID: \"eeea95e4-9382-44a4-8d49-f7b93764175d\") " Nov 8 00:24:35.954758 kubelet[2659]: I1108 00:24:35.954757 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxn69\" (UniqueName: \"kubernetes.io/projected/eeea95e4-9382-44a4-8d49-f7b93764175d-kube-api-access-xxn69\") pod \"eeea95e4-9382-44a4-8d49-f7b93764175d\" (UID: \"eeea95e4-9382-44a4-8d49-f7b93764175d\") " Nov 8 00:24:35.954899 kubelet[2659]: I1108 00:24:35.954788 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eeea95e4-9382-44a4-8d49-f7b93764175d-whisker-ca-bundle\") pod \"eeea95e4-9382-44a4-8d49-f7b93764175d\" (UID: \"eeea95e4-9382-44a4-8d49-f7b93764175d\") " Nov 8 00:24:35.963875 kubelet[2659]: I1108 00:24:35.963824 2659 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeea95e4-9382-44a4-8d49-f7b93764175d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "eeea95e4-9382-44a4-8d49-f7b93764175d" (UID: "eeea95e4-9382-44a4-8d49-f7b93764175d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:24:35.968813 kubelet[2659]: I1108 00:24:35.968677 2659 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeea95e4-9382-44a4-8d49-f7b93764175d-kube-api-access-xxn69" (OuterVolumeSpecName: "kube-api-access-xxn69") pod "eeea95e4-9382-44a4-8d49-f7b93764175d" (UID: "eeea95e4-9382-44a4-8d49-f7b93764175d"). InnerVolumeSpecName "kube-api-access-xxn69". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:24:35.968934 kubelet[2659]: I1108 00:24:35.968909 2659 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeea95e4-9382-44a4-8d49-f7b93764175d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "eeea95e4-9382-44a4-8d49-f7b93764175d" (UID: "eeea95e4-9382-44a4-8d49-f7b93764175d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:24:35.970857 systemd[1]: var-lib-kubelet-pods-eeea95e4\x2d9382\x2d44a4\x2d8d49\x2df7b93764175d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxxn69.mount: Deactivated successfully. Nov 8 00:24:35.971061 systemd[1]: var-lib-kubelet-pods-eeea95e4\x2d9382\x2d44a4\x2d8d49\x2df7b93764175d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:24:36.055560 kubelet[2659]: I1108 00:24:36.055489 2659 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eeea95e4-9382-44a4-8d49-f7b93764175d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 8 00:24:36.055560 kubelet[2659]: I1108 00:24:36.055545 2659 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxn69\" (UniqueName: \"kubernetes.io/projected/eeea95e4-9382-44a4-8d49-f7b93764175d-kube-api-access-xxn69\") on node \"localhost\" DevicePath \"\"" Nov 8 00:24:36.055560 kubelet[2659]: I1108 00:24:36.055560 2659 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eeea95e4-9382-44a4-8d49-f7b93764175d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 8 00:24:36.961934 kubelet[2659]: I1108 00:24:36.961871 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/497820bc-a22f-4ed0-899b-b37a4c4036b5-whisker-ca-bundle\") pod \"whisker-85bb64496c-96z92\" (UID: \"497820bc-a22f-4ed0-899b-b37a4c4036b5\") " pod="calico-system/whisker-85bb64496c-96z92" Nov 8 00:24:36.961934 kubelet[2659]: I1108 00:24:36.961919 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85wsm\" (UniqueName: \"kubernetes.io/projected/497820bc-a22f-4ed0-899b-b37a4c4036b5-kube-api-access-85wsm\") pod \"whisker-85bb64496c-96z92\" (UID: \"497820bc-a22f-4ed0-899b-b37a4c4036b5\") " pod="calico-system/whisker-85bb64496c-96z92" Nov 8 00:24:36.962745 kubelet[2659]: I1108 00:24:36.961949 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/497820bc-a22f-4ed0-899b-b37a4c4036b5-whisker-backend-key-pair\") pod \"whisker-85bb64496c-96z92\" (UID: \"497820bc-a22f-4ed0-899b-b37a4c4036b5\") " pod="calico-system/whisker-85bb64496c-96z92" Nov 8 00:24:37.233869 containerd[1574]: time="2025-11-08T00:24:37.233732105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85bb64496c-96z92,Uid:497820bc-a22f-4ed0-899b-b37a4c4036b5,Namespace:calico-system,Attempt:0,}" Nov 8 00:24:37.357741 systemd-networkd[1245]: caliac8fabc5812: Link UP Nov 8 00:24:37.358378 systemd-networkd[1245]: caliac8fabc5812: Gained carrier Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.277 [INFO][4208] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 
00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.286 [INFO][4208] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--85bb64496c--96z92-eth0 whisker-85bb64496c- calico-system 497820bc-a22f-4ed0-899b-b37a4c4036b5 936 0 2025-11-08 00:24:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:85bb64496c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-85bb64496c-96z92 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliac8fabc5812 [] [] }} ContainerID="010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" Namespace="calico-system" Pod="whisker-85bb64496c-96z92" WorkloadEndpoint="localhost-k8s-whisker--85bb64496c--96z92-" Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.286 [INFO][4208] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" Namespace="calico-system" Pod="whisker-85bb64496c-96z92" WorkloadEndpoint="localhost-k8s-whisker--85bb64496c--96z92-eth0" Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.313 [INFO][4221] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" HandleID="k8s-pod-network.010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" Workload="localhost-k8s-whisker--85bb64496c--96z92-eth0" Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.314 [INFO][4221] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" HandleID="k8s-pod-network.010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" Workload="localhost-k8s-whisker--85bb64496c--96z92-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001199f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-85bb64496c-96z92", "timestamp":"2025-11-08 00:24:37.313822718 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.314 [INFO][4221] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.314 [INFO][4221] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.314 [INFO][4221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.320 [INFO][4221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" host="localhost" Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.325 [INFO][4221] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.329 [INFO][4221] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.331 [INFO][4221] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.332 [INFO][4221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.332 [INFO][4221] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" host="localhost" Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.333 [INFO][4221] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.337 [INFO][4221] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" host="localhost" Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.341 [INFO][4221] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" host="localhost" Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.341 [INFO][4221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" host="localhost" Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.341 [INFO][4221] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
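[annotation] The ipam/ipam.go entries above are Calico's allocation path in order: look up this host's block affinities, confirm affinity for the /26 block 192.168.88.128/26, then claim the first free address in it (192.168.88.129 for this whisker pod; the result entry follows below, and .130 and .131 go to the next two pods). A small sketch of the block arithmetic using Go's net/netip, independent of Calico's actual data structures; the allocated set is invented for illustration.

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")

        // Addresses already handed out; illustrative only, not read
        // from Calico's datastore.
        allocated := map[netip.Addr]bool{
            netip.MustParseAddr("192.168.88.128"): true,
        }

        // Walk the block and claim the first free address, as the
        // "Attempting to assign 1 addresses from block" step does.
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !allocated[a] {
                fmt.Println("next free address:", a) // 192.168.88.129
                break
            }
        }

        // A /26 holds 64 addresses, so one host-affine block covers 64 pods.
        fmt.Println("block size:", 1<<(32-block.Bits()))
    }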
Nov 8 00:24:37.373125 containerd[1574]: 2025-11-08 00:24:37.341 [INFO][4221] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" HandleID="k8s-pod-network.010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" Workload="localhost-k8s-whisker--85bb64496c--96z92-eth0" Nov 8 00:24:37.374052 containerd[1574]: 2025-11-08 00:24:37.345 [INFO][4208] cni-plugin/k8s.go 418: Populated endpoint ContainerID="010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" Namespace="calico-system" Pod="whisker-85bb64496c-96z92" WorkloadEndpoint="localhost-k8s-whisker--85bb64496c--96z92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--85bb64496c--96z92-eth0", GenerateName:"whisker-85bb64496c-", Namespace:"calico-system", SelfLink:"", UID:"497820bc-a22f-4ed0-899b-b37a4c4036b5", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85bb64496c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-85bb64496c-96z92", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliac8fabc5812", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:24:37.374052 containerd[1574]: 2025-11-08 00:24:37.345 [INFO][4208] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" Namespace="calico-system" Pod="whisker-85bb64496c-96z92" WorkloadEndpoint="localhost-k8s-whisker--85bb64496c--96z92-eth0" Nov 8 00:24:37.374052 containerd[1574]: 2025-11-08 00:24:37.345 [INFO][4208] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac8fabc5812 ContainerID="010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" Namespace="calico-system" Pod="whisker-85bb64496c-96z92" WorkloadEndpoint="localhost-k8s-whisker--85bb64496c--96z92-eth0" Nov 8 00:24:37.374052 containerd[1574]: 2025-11-08 00:24:37.359 [INFO][4208] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" Namespace="calico-system" Pod="whisker-85bb64496c-96z92" WorkloadEndpoint="localhost-k8s-whisker--85bb64496c--96z92-eth0" Nov 8 00:24:37.374052 containerd[1574]: 2025-11-08 00:24:37.359 [INFO][4208] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" Namespace="calico-system" Pod="whisker-85bb64496c-96z92" WorkloadEndpoint="localhost-k8s-whisker--85bb64496c--96z92-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--85bb64496c--96z92-eth0", GenerateName:"whisker-85bb64496c-", Namespace:"calico-system", SelfLink:"", UID:"497820bc-a22f-4ed0-899b-b37a4c4036b5", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85bb64496c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e", Pod:"whisker-85bb64496c-96z92", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliac8fabc5812", MAC:"6a:cb:82:c7:17:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:24:37.374052 containerd[1574]: 2025-11-08 00:24:37.368 [INFO][4208] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e" Namespace="calico-system" Pod="whisker-85bb64496c-96z92" WorkloadEndpoint="localhost-k8s-whisker--85bb64496c--96z92-eth0" Nov 8 00:24:37.409397 containerd[1574]: time="2025-11-08T00:24:37.409232915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:37.410307 containerd[1574]: time="2025-11-08T00:24:37.410041076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:37.410307 containerd[1574]: time="2025-11-08T00:24:37.410061754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:37.410307 containerd[1574]: time="2025-11-08T00:24:37.410240471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:37.451377 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:24:37.482589 containerd[1574]: time="2025-11-08T00:24:37.482550847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85bb64496c-96z92,Uid:497820bc-a22f-4ed0-899b-b37a4c4036b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"010d68368fbed8311b4c9cf9f1a70442f056bc92977627329164047b3e28b62e\"" Nov 8 00:24:37.486175 containerd[1574]: time="2025-11-08T00:24:37.486084939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:24:37.879835 kubelet[2659]: I1108 00:24:37.879800 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:24:37.880341 kubelet[2659]: E1108 00:24:37.880309 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:37.914224 containerd[1574]: time="2025-11-08T00:24:37.914177009Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:24:38.153638 containerd[1574]: time="2025-11-08T00:24:38.144257905Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:24:38.153638 containerd[1574]: time="2025-11-08T00:24:38.144885165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:24:38.154169 kubelet[2659]: E1108 00:24:38.153816 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:24:38.154169 kubelet[2659]: E1108 00:24:38.153893 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:24:38.159562 kubelet[2659]: E1108 00:24:38.154102 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:168e7d9989574572b8f93af229b1409b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85wsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85bb64496c-96z92_calico-system(497820bc-a22f-4ed0-899b-b37a4c4036b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:24:38.159686 containerd[1574]: time="2025-11-08T00:24:38.156062574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:24:38.654672 systemd[1]: Started sshd@7-10.0.0.93:22-10.0.0.1:49918.service - OpenSSH per-connection server daemon (10.0.0.1:49918). Nov 8 00:24:38.693447 sshd[4350]: Accepted publickey for core from 10.0.0.1 port 49918 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:24:38.695733 sshd[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:38.701647 systemd-logind[1557]: New session 8 of user core. Nov 8 00:24:38.709811 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:24:38.723541 kubelet[2659]: I1108 00:24:38.723508 2659 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeea95e4-9382-44a4-8d49-f7b93764175d" path="/var/lib/kubelet/pods/eeea95e4-9382-44a4-8d49-f7b93764175d/volumes" Nov 8 00:24:38.853399 sshd[4350]: pam_unix(sshd:session): session closed for user core Nov 8 00:24:38.857415 systemd[1]: sshd@7-10.0.0.93:22-10.0.0.1:49918.service: Deactivated successfully. Nov 8 00:24:38.859947 systemd-logind[1557]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:24:38.860063 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:24:38.861554 systemd-logind[1557]: Removed session 8. 
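[annotation] The pull of ghcr.io/flatcar/calico/whisker:v3.30.4 above fails with NotFound, and by 00:24:39 kubelet has moved the container from ErrImagePull into ImagePullBackOff: the pull is retried with exponential back-off instead of hammering the registry. A sketch of that retry shape; the 10s initial delay and 5m cap are my assumption of kubelet's image-pull back-off defaults, and pullImage is a stub, not a real CRI call.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNotFound = errors.New("rpc error: code = NotFound desc = failed to resolve reference")

    // pullImage stands in for the CRI PullImage call that keeps failing above.
    func pullImage(ref string) error { return errNotFound }

    func main() {
        const ref = "ghcr.io/flatcar/calico/whisker:v3.30.4"
        delay := 10 * time.Second        // assumed initial back-off
        const maxDelay = 5 * time.Minute // assumed cap

        for attempt := 1; attempt <= 5; attempt++ {
            err := pullImage(ref)
            if err == nil {
                fmt.Println("pulled", ref)
                return
            }
            fmt.Printf("attempt %d: %v; back-off pulling image %q for %s\n",
                attempt, err, ref, delay)
            time.Sleep(delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }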
Nov 8 00:24:38.869610 systemd-networkd[1245]: caliac8fabc5812: Gained IPv6LL Nov 8 00:24:38.910525 containerd[1574]: time="2025-11-08T00:24:38.910393841Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:24:39.090238 containerd[1574]: time="2025-11-08T00:24:39.090133602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:24:39.090238 containerd[1574]: time="2025-11-08T00:24:39.090190306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:24:39.090598 kubelet[2659]: E1108 00:24:39.090517 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:24:39.090658 kubelet[2659]: E1108 00:24:39.090602 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:24:39.090784 kubelet[2659]: E1108 00:24:39.090728 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85wsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85bb64496c-96z92_calico-system(497820bc-a22f-4ed0-899b-b37a4c4036b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:24:39.091959 kubelet[2659]: E1108 00:24:39.091904 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85bb64496c-96z92" podUID="497820bc-a22f-4ed0-899b-b37a4c4036b5" Nov 8 00:24:39.886713 kubelet[2659]: E1108 00:24:39.886653 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85bb64496c-96z92" podUID="497820bc-a22f-4ed0-899b-b37a4c4036b5" Nov 8 00:24:40.722393 containerd[1574]: time="2025-11-08T00:24:40.722287741Z" level=info msg="StopPodSandbox for \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\"" Nov 8 00:24:40.723011 containerd[1574]: time="2025-11-08T00:24:40.722627897Z" level=info msg="StopPodSandbox for \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\"" Nov 8 00:24:40.825160 containerd[1574]: 2025-11-08 00:24:40.782 [INFO][4441] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Nov 8 00:24:40.825160 containerd[1574]: 2025-11-08 00:24:40.783 [INFO][4441] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" iface="eth0" netns="/var/run/netns/cni-7cfc5669-9713-4737-df18-19ca9587fcc8" Nov 8 00:24:40.825160 containerd[1574]: 2025-11-08 00:24:40.783 [INFO][4441] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" iface="eth0" netns="/var/run/netns/cni-7cfc5669-9713-4737-df18-19ca9587fcc8" Nov 8 00:24:40.825160 containerd[1574]: 2025-11-08 00:24:40.783 [INFO][4441] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" iface="eth0" netns="/var/run/netns/cni-7cfc5669-9713-4737-df18-19ca9587fcc8" Nov 8 00:24:40.825160 containerd[1574]: 2025-11-08 00:24:40.783 [INFO][4441] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Nov 8 00:24:40.825160 containerd[1574]: 2025-11-08 00:24:40.784 [INFO][4441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Nov 8 00:24:40.825160 containerd[1574]: 2025-11-08 00:24:40.812 [INFO][4458] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" HandleID="k8s-pod-network.50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Workload="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" Nov 8 00:24:40.825160 containerd[1574]: 2025-11-08 00:24:40.812 [INFO][4458] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:24:40.825160 containerd[1574]: 2025-11-08 00:24:40.812 [INFO][4458] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:24:40.825160 containerd[1574]: 2025-11-08 00:24:40.817 [WARNING][4458] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" HandleID="k8s-pod-network.50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Workload="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" Nov 8 00:24:40.825160 containerd[1574]: 2025-11-08 00:24:40.817 [INFO][4458] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" HandleID="k8s-pod-network.50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Workload="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" Nov 8 00:24:40.825160 containerd[1574]: 2025-11-08 00:24:40.819 [INFO][4458] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:24:40.825160 containerd[1574]: 2025-11-08 00:24:40.822 [INFO][4441] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Nov 8 00:24:40.828619 containerd[1574]: time="2025-11-08T00:24:40.825591275Z" level=info msg="TearDown network for sandbox \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\" successfully" Nov 8 00:24:40.828619 containerd[1574]: time="2025-11-08T00:24:40.825637349Z" level=info msg="StopPodSandbox for \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\" returns successfully" Nov 8 00:24:40.831264 containerd[1574]: time="2025-11-08T00:24:40.829542271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-999d4cc44-xrwzd,Uid:060a07da-b44f-4d4c-ae28-2a94dae48d16,Namespace:calico-system,Attempt:1,}" Nov 8 00:24:40.830479 systemd[1]: run-netns-cni\x2d7cfc5669\x2d9713\x2d4737\x2ddf18\x2d19ca9587fcc8.mount: Deactivated successfully. Nov 8 00:24:40.838390 containerd[1574]: 2025-11-08 00:24:40.785 [INFO][4446] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Nov 8 00:24:40.838390 containerd[1574]: 2025-11-08 00:24:40.785 [INFO][4446] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" iface="eth0" netns="/var/run/netns/cni-45eacaef-33f8-8050-36a7-5a71013ac7bf" Nov 8 00:24:40.838390 containerd[1574]: 2025-11-08 00:24:40.785 [INFO][4446] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" iface="eth0" netns="/var/run/netns/cni-45eacaef-33f8-8050-36a7-5a71013ac7bf" Nov 8 00:24:40.838390 containerd[1574]: 2025-11-08 00:24:40.785 [INFO][4446] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" iface="eth0" netns="/var/run/netns/cni-45eacaef-33f8-8050-36a7-5a71013ac7bf" Nov 8 00:24:40.838390 containerd[1574]: 2025-11-08 00:24:40.785 [INFO][4446] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Nov 8 00:24:40.838390 containerd[1574]: 2025-11-08 00:24:40.785 [INFO][4446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Nov 8 00:24:40.838390 containerd[1574]: 2025-11-08 00:24:40.819 [INFO][4460] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" HandleID="k8s-pod-network.4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Workload="localhost-k8s-goldmane--666569f655--z8t5h-eth0" Nov 8 00:24:40.838390 containerd[1574]: 2025-11-08 00:24:40.819 [INFO][4460] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:24:40.838390 containerd[1574]: 2025-11-08 00:24:40.820 [INFO][4460] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:24:40.838390 containerd[1574]: 2025-11-08 00:24:40.829 [WARNING][4460] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" HandleID="k8s-pod-network.4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Workload="localhost-k8s-goldmane--666569f655--z8t5h-eth0" Nov 8 00:24:40.838390 containerd[1574]: 2025-11-08 00:24:40.831 [INFO][4460] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" HandleID="k8s-pod-network.4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Workload="localhost-k8s-goldmane--666569f655--z8t5h-eth0" Nov 8 00:24:40.838390 containerd[1574]: 2025-11-08 00:24:40.832 [INFO][4460] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:24:40.838390 containerd[1574]: 2025-11-08 00:24:40.835 [INFO][4446] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Nov 8 00:24:40.841480 containerd[1574]: time="2025-11-08T00:24:40.840545205Z" level=info msg="TearDown network for sandbox \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\" successfully" Nov 8 00:24:40.841480 containerd[1574]: time="2025-11-08T00:24:40.840579829Z" level=info msg="StopPodSandbox for \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\" returns successfully" Nov 8 00:24:40.841984 systemd[1]: run-netns-cni\x2d45eacaef\x2d33f8\x2d8050\x2d36a7\x2d5a71013ac7bf.mount: Deactivated successfully. 
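[annotation] The systemd mount units above, such as run-netns-cni\x2d45eacaef\x2d….mount, show systemd's path escaping: the leading slash is dropped, remaining slashes become '-', and any byte outside [A-Za-z0-9:_.] is hex-escaped, so '-' becomes \x2d and '~' becomes \x7e (as in the kubernetes.io\x7eprojected units earlier). A rough Go rendering of the idea, roughly what `systemd-escape -p` does, with edge cases such as a leading dot simplified away.

    package main

    import (
        "fmt"
        "strings"
    )

    // unitEscape approximates systemd's path escaping: strip outer slashes,
    // map interior slashes to '-', and hex-escape everything outside
    // [A-Za-z0-9:_.] as \xNN.
    func unitEscape(path string) string {
        path = strings.Trim(path, "/")
        var b strings.Builder
        for i := 0; i < len(path); i++ {
            c := path[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    func main() {
        // Reproduces the unit name logged above (plus the ".mount" suffix).
        fmt.Println(unitEscape("/run/netns/cni-45eacaef-33f8-8050-36a7-5a71013ac7bf") + ".mount")
    }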
Nov 8 00:24:40.842068 containerd[1574]: time="2025-11-08T00:24:40.841982539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z8t5h,Uid:e4d2644c-0b49-478b-8030-eea32781a579,Namespace:calico-system,Attempt:1,}" Nov 8 00:24:40.964502 systemd-networkd[1245]: cali73fc3435791: Link UP Nov 8 00:24:40.964827 systemd-networkd[1245]: cali73fc3435791: Gained carrier Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.885 [INFO][4475] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.896 [INFO][4475] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0 calico-kube-controllers-999d4cc44- calico-system 060a07da-b44f-4d4c-ae28-2a94dae48d16 1011 0 2025-11-08 00:24:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:999d4cc44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-999d4cc44-xrwzd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali73fc3435791 [] [] }} ContainerID="4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" Namespace="calico-system" Pod="calico-kube-controllers-999d4cc44-xrwzd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-" Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.896 [INFO][4475] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" Namespace="calico-system" Pod="calico-kube-controllers-999d4cc44-xrwzd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.928 [INFO][4502] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" HandleID="k8s-pod-network.4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" Workload="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.929 [INFO][4502] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" HandleID="k8s-pod-network.4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" Workload="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001396e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-999d4cc44-xrwzd", "timestamp":"2025-11-08 00:24:40.928816064 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.929 [INFO][4502] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.929 [INFO][4502] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.929 [INFO][4502] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.935 [INFO][4502] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" host="localhost" Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.940 [INFO][4502] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.944 [INFO][4502] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.945 [INFO][4502] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.947 [INFO][4502] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.947 [INFO][4502] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" host="localhost" Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.948 [INFO][4502] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475 Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.951 [INFO][4502] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" host="localhost" Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.957 [INFO][4502] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" host="localhost" Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.957 [INFO][4502] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" host="localhost" Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.957 [INFO][4502] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:24:40.975532 containerd[1574]: 2025-11-08 00:24:40.957 [INFO][4502] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" HandleID="k8s-pod-network.4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" Workload="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" Nov 8 00:24:40.976083 containerd[1574]: 2025-11-08 00:24:40.960 [INFO][4475] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" Namespace="calico-system" Pod="calico-kube-controllers-999d4cc44-xrwzd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0", GenerateName:"calico-kube-controllers-999d4cc44-", Namespace:"calico-system", SelfLink:"", UID:"060a07da-b44f-4d4c-ae28-2a94dae48d16", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"999d4cc44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-999d4cc44-xrwzd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73fc3435791", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:24:40.976083 containerd[1574]: 2025-11-08 00:24:40.960 [INFO][4475] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" Namespace="calico-system" Pod="calico-kube-controllers-999d4cc44-xrwzd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" Nov 8 00:24:40.976083 containerd[1574]: 2025-11-08 00:24:40.960 [INFO][4475] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73fc3435791 ContainerID="4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" Namespace="calico-system" Pod="calico-kube-controllers-999d4cc44-xrwzd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" Nov 8 00:24:40.976083 containerd[1574]: 2025-11-08 00:24:40.963 [INFO][4475] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" Namespace="calico-system" Pod="calico-kube-controllers-999d4cc44-xrwzd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" Nov 8 00:24:40.976083 containerd[1574]: 2025-11-08 00:24:40.963 [INFO][4475] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" Namespace="calico-system" Pod="calico-kube-controllers-999d4cc44-xrwzd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0", GenerateName:"calico-kube-controllers-999d4cc44-", Namespace:"calico-system", SelfLink:"", UID:"060a07da-b44f-4d4c-ae28-2a94dae48d16", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"999d4cc44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475", Pod:"calico-kube-controllers-999d4cc44-xrwzd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73fc3435791", MAC:"9a:71:27:af:90:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:24:40.976083 containerd[1574]: 2025-11-08 00:24:40.972 [INFO][4475] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475" Namespace="calico-system" Pod="calico-kube-controllers-999d4cc44-xrwzd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" Nov 8 00:24:40.993950 containerd[1574]: time="2025-11-08T00:24:40.993862734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:40.994095 containerd[1574]: time="2025-11-08T00:24:40.993982163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:40.994095 containerd[1574]: time="2025-11-08T00:24:40.994051261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:40.994262 containerd[1574]: time="2025-11-08T00:24:40.994206025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:41.022840 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:24:41.057415 containerd[1574]: time="2025-11-08T00:24:41.057349054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-999d4cc44-xrwzd,Uid:060a07da-b44f-4d4c-ae28-2a94dae48d16,Namespace:calico-system,Attempt:1,} returns sandbox id \"4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475\"" Nov 8 00:24:41.059416 containerd[1574]: time="2025-11-08T00:24:41.059379964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:24:41.073960 systemd-networkd[1245]: cali04d21bd8d0f: Link UP Nov 8 00:24:41.074231 systemd-networkd[1245]: cali04d21bd8d0f: Gained carrier Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:40.896 [INFO][4485] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:40.907 [INFO][4485] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--z8t5h-eth0 goldmane-666569f655- calico-system e4d2644c-0b49-478b-8030-eea32781a579 1012 0 2025-11-08 00:24:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-z8t5h eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali04d21bd8d0f [] [] }} ContainerID="a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" Namespace="calico-system" Pod="goldmane-666569f655-z8t5h" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z8t5h-" Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:40.907 [INFO][4485] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" Namespace="calico-system" Pod="goldmane-666569f655-z8t5h" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z8t5h-eth0" Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:40.942 [INFO][4509] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" HandleID="k8s-pod-network.a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" Workload="localhost-k8s-goldmane--666569f655--z8t5h-eth0" Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:40.942 [INFO][4509] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" HandleID="k8s-pod-network.a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" Workload="localhost-k8s-goldmane--666569f655--z8t5h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-z8t5h", "timestamp":"2025-11-08 00:24:40.942193835 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:40.942 [INFO][4509] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:40.957 [INFO][4509] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:40.957 [INFO][4509] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:41.037 [INFO][4509] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" host="localhost" Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:41.044 [INFO][4509] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:41.051 [INFO][4509] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:41.054 [INFO][4509] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:41.056 [INFO][4509] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:41.056 [INFO][4509] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" host="localhost" Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:41.057 [INFO][4509] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582 Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:41.062 [INFO][4509] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" host="localhost" Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:41.067 [INFO][4509] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" host="localhost" Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:41.067 [INFO][4509] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" host="localhost" Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:41.067 [INFO][4509] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
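[annotation] Once the goldmane endpoint's veth is created, Calico records a MAC such as 66:7e:46:86:4d:ed on it (the endpoint dump follows below). Addresses of this shape are random but unicast and locally administered: bit 0 of the first octet is clear and bit 1 is set. A short sketch of generating one; randomMAC is my name for the helper, not Calico's.

    package main

    import (
        "crypto/rand"
        "fmt"
        "log"
        "net"
    )

    // randomMAC returns a random unicast, locally administered MAC address,
    // the kind of address a CNI plugin typically assigns to a veth.
    func randomMAC() (net.HardwareAddr, error) {
        mac := make(net.HardwareAddr, 6)
        if _, err := rand.Read(mac); err != nil {
            return nil, err
        }
        mac[0] &^= 0x01 // clear the multicast bit: unicast
        mac[0] |= 0x02  // set the locally administered bit
        return mac, nil
    }

    func main() {
        mac, err := randomMAC()
        if err != nil {
            log.Fatal(err)
        }
        // e.g. 66:7e:46:86:4d:ed above: 0x66 & 0x03 == 0x02, so both bits check out.
        fmt.Println(mac)
    }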
Nov 8 00:24:41.089532 containerd[1574]: 2025-11-08 00:24:41.068 [INFO][4509] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" HandleID="k8s-pod-network.a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" Workload="localhost-k8s-goldmane--666569f655--z8t5h-eth0" Nov 8 00:24:41.090145 containerd[1574]: 2025-11-08 00:24:41.071 [INFO][4485] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" Namespace="calico-system" Pod="goldmane-666569f655-z8t5h" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z8t5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--z8t5h-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e4d2644c-0b49-478b-8030-eea32781a579", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-z8t5h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali04d21bd8d0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:24:41.090145 containerd[1574]: 2025-11-08 00:24:41.071 [INFO][4485] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" Namespace="calico-system" Pod="goldmane-666569f655-z8t5h" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z8t5h-eth0" Nov 8 00:24:41.090145 containerd[1574]: 2025-11-08 00:24:41.071 [INFO][4485] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04d21bd8d0f ContainerID="a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" Namespace="calico-system" Pod="goldmane-666569f655-z8t5h" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z8t5h-eth0" Nov 8 00:24:41.090145 containerd[1574]: 2025-11-08 00:24:41.073 [INFO][4485] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" Namespace="calico-system" Pod="goldmane-666569f655-z8t5h" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z8t5h-eth0" Nov 8 00:24:41.090145 containerd[1574]: 2025-11-08 00:24:41.074 [INFO][4485] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" Namespace="calico-system" Pod="goldmane-666569f655-z8t5h" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z8t5h-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--z8t5h-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e4d2644c-0b49-478b-8030-eea32781a579", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582", Pod:"goldmane-666569f655-z8t5h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali04d21bd8d0f", MAC:"66:7e:46:86:4d:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:24:41.090145 containerd[1574]: 2025-11-08 00:24:41.084 [INFO][4485] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582" Namespace="calico-system" Pod="goldmane-666569f655-z8t5h" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z8t5h-eth0" Nov 8 00:24:41.113267 containerd[1574]: time="2025-11-08T00:24:41.113040277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:41.113267 containerd[1574]: time="2025-11-08T00:24:41.113124161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:41.113267 containerd[1574]: time="2025-11-08T00:24:41.113138227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:41.113564 containerd[1574]: time="2025-11-08T00:24:41.113315713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:41.146368 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:24:41.174235 containerd[1574]: time="2025-11-08T00:24:41.174146910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z8t5h,Uid:e4d2644c-0b49-478b-8030-eea32781a579,Namespace:calico-system,Attempt:1,} returns sandbox id \"a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582\"" Nov 8 00:24:41.434620 containerd[1574]: time="2025-11-08T00:24:41.434555851Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:24:41.451567 containerd[1574]: time="2025-11-08T00:24:41.451491093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:24:41.451784 containerd[1574]: time="2025-11-08T00:24:41.451555412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:24:41.451989 kubelet[2659]: E1108 00:24:41.451922 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:24:41.452480 kubelet[2659]: E1108 00:24:41.452001 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:24:41.452480 kubelet[2659]: E1108 00:24:41.452322 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndblm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-999d4cc44-xrwzd_calico-system(060a07da-b44f-4d4c-ae28-2a94dae48d16): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:24:41.452679 containerd[1574]: time="2025-11-08T00:24:41.452460058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:24:41.453809 kubelet[2659]: E1108 00:24:41.453762 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-999d4cc44-xrwzd" 
podUID="060a07da-b44f-4d4c-ae28-2a94dae48d16" Nov 8 00:24:41.811347 containerd[1574]: time="2025-11-08T00:24:41.811151863Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:24:41.812724 containerd[1574]: time="2025-11-08T00:24:41.812684245Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:24:41.812795 containerd[1574]: time="2025-11-08T00:24:41.812732473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:24:41.812976 kubelet[2659]: E1108 00:24:41.812925 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:24:41.813054 kubelet[2659]: E1108 00:24:41.812984 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:24:41.813178 kubelet[2659]: E1108 00:24:41.813126 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvmp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-z8t5h_calico-system(e4d2644c-0b49-478b-8030-eea32781a579): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:24:41.814382 kubelet[2659]: E1108 00:24:41.814333 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t5h" podUID="e4d2644c-0b49-478b-8030-eea32781a579" Nov 8 00:24:41.892887 kubelet[2659]: E1108 00:24:41.892708 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-999d4cc44-xrwzd" podUID="060a07da-b44f-4d4c-ae28-2a94dae48d16" Nov 8 00:24:41.894311 kubelet[2659]: E1108 00:24:41.894276 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t5h" podUID="e4d2644c-0b49-478b-8030-eea32781a579" Nov 8 00:24:42.261602 systemd-networkd[1245]: cali04d21bd8d0f: Gained IPv6LL Nov 8 00:24:42.709779 systemd-networkd[1245]: cali73fc3435791: Gained IPv6LL Nov 8 00:24:42.721829 containerd[1574]: time="2025-11-08T00:24:42.721532068Z" level=info msg="StopPodSandbox for \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\"" Nov 8 00:24:42.721829 containerd[1574]: time="2025-11-08T00:24:42.721620923Z" level=info msg="StopPodSandbox for \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\"" Nov 8 00:24:42.813183 
containerd[1574]: 2025-11-08 00:24:42.769 [INFO][4689] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Nov 8 00:24:42.813183 containerd[1574]: 2025-11-08 00:24:42.769 [INFO][4689] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" iface="eth0" netns="/var/run/netns/cni-a8bdf537-7f4e-afd3-f4bc-027bd556e7be" Nov 8 00:24:42.813183 containerd[1574]: 2025-11-08 00:24:42.769 [INFO][4689] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" iface="eth0" netns="/var/run/netns/cni-a8bdf537-7f4e-afd3-f4bc-027bd556e7be" Nov 8 00:24:42.813183 containerd[1574]: 2025-11-08 00:24:42.772 [INFO][4689] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" iface="eth0" netns="/var/run/netns/cni-a8bdf537-7f4e-afd3-f4bc-027bd556e7be" Nov 8 00:24:42.813183 containerd[1574]: 2025-11-08 00:24:42.772 [INFO][4689] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Nov 8 00:24:42.813183 containerd[1574]: 2025-11-08 00:24:42.772 [INFO][4689] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Nov 8 00:24:42.813183 containerd[1574]: 2025-11-08 00:24:42.801 [INFO][4703] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" HandleID="k8s-pod-network.a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Workload="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0" Nov 8 00:24:42.813183 containerd[1574]: 2025-11-08 00:24:42.801 [INFO][4703] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:24:42.813183 containerd[1574]: 2025-11-08 00:24:42.801 [INFO][4703] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:24:42.813183 containerd[1574]: 2025-11-08 00:24:42.806 [WARNING][4703] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" HandleID="k8s-pod-network.a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Workload="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0" Nov 8 00:24:42.813183 containerd[1574]: 2025-11-08 00:24:42.806 [INFO][4703] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" HandleID="k8s-pod-network.a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Workload="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0" Nov 8 00:24:42.813183 containerd[1574]: 2025-11-08 00:24:42.808 [INFO][4703] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:24:42.813183 containerd[1574]: 2025-11-08 00:24:42.810 [INFO][4689] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Nov 8 00:24:42.814185 containerd[1574]: time="2025-11-08T00:24:42.814156251Z" level=info msg="TearDown network for sandbox \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\" successfully" Nov 8 00:24:42.814253 containerd[1574]: time="2025-11-08T00:24:42.814239785Z" level=info msg="StopPodSandbox for \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\" returns successfully" Nov 8 00:24:42.814942 kubelet[2659]: E1108 00:24:42.814908 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:42.816475 containerd[1574]: time="2025-11-08T00:24:42.816138866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kckcx,Uid:21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9,Namespace:kube-system,Attempt:1,}" Nov 8 00:24:42.818064 systemd[1]: run-netns-cni\x2da8bdf537\x2d7f4e\x2dafd3\x2df4bc\x2d027bd556e7be.mount: Deactivated successfully. Nov 8 00:24:42.829636 containerd[1574]: 2025-11-08 00:24:42.780 [INFO][4688] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Nov 8 00:24:42.829636 containerd[1574]: 2025-11-08 00:24:42.782 [INFO][4688] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" iface="eth0" netns="/var/run/netns/cni-d7fd3ce8-ec06-feed-b606-8e11648fc981" Nov 8 00:24:42.829636 containerd[1574]: 2025-11-08 00:24:42.782 [INFO][4688] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" iface="eth0" netns="/var/run/netns/cni-d7fd3ce8-ec06-feed-b606-8e11648fc981" Nov 8 00:24:42.829636 containerd[1574]: 2025-11-08 00:24:42.783 [INFO][4688] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" iface="eth0" netns="/var/run/netns/cni-d7fd3ce8-ec06-feed-b606-8e11648fc981" Nov 8 00:24:42.829636 containerd[1574]: 2025-11-08 00:24:42.783 [INFO][4688] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Nov 8 00:24:42.829636 containerd[1574]: 2025-11-08 00:24:42.783 [INFO][4688] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Nov 8 00:24:42.829636 containerd[1574]: 2025-11-08 00:24:42.813 [INFO][4710] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" HandleID="k8s-pod-network.39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" Nov 8 00:24:42.829636 containerd[1574]: 2025-11-08 00:24:42.813 [INFO][4710] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:24:42.829636 containerd[1574]: 2025-11-08 00:24:42.813 [INFO][4710] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:24:42.829636 containerd[1574]: 2025-11-08 00:24:42.821 [WARNING][4710] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" HandleID="k8s-pod-network.39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" Nov 8 00:24:42.829636 containerd[1574]: 2025-11-08 00:24:42.822 [INFO][4710] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" HandleID="k8s-pod-network.39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" Nov 8 00:24:42.829636 containerd[1574]: 2025-11-08 00:24:42.823 [INFO][4710] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:24:42.829636 containerd[1574]: 2025-11-08 00:24:42.826 [INFO][4688] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Nov 8 00:24:42.830201 containerd[1574]: time="2025-11-08T00:24:42.830039089Z" level=info msg="TearDown network for sandbox \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\" successfully" Nov 8 00:24:42.830201 containerd[1574]: time="2025-11-08T00:24:42.830068865Z" level=info msg="StopPodSandbox for \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\" returns successfully" Nov 8 00:24:42.830881 containerd[1574]: time="2025-11-08T00:24:42.830848201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f8bbf6f-npzst,Uid:9d10b4e8-1a4b-40ad-b663-a53c60424a45,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:24:42.834014 systemd[1]: run-netns-cni\x2dd7fd3ce8\x2dec06\x2dfeed\x2db606\x2d8e11648fc981.mount: Deactivated successfully. Nov 8 00:24:42.898989 kubelet[2659]: E1108 00:24:42.898911 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t5h" podUID="e4d2644c-0b49-478b-8030-eea32781a579" Nov 8 00:24:42.906815 kubelet[2659]: E1108 00:24:42.900987 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-999d4cc44-xrwzd" podUID="060a07da-b44f-4d4c-ae28-2a94dae48d16" Nov 8 00:24:42.954278 systemd-networkd[1245]: cali29e1f61b725: Link UP Nov 8 00:24:42.954725 systemd-networkd[1245]: cali29e1f61b725: Gained carrier Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.862 [INFO][4720] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.875 [INFO][4720] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--668d6bf9bc--kckcx-eth0 coredns-668d6bf9bc- kube-system 21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9 1046 0 2025-11-08 00:24:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-kckcx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali29e1f61b725 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" Namespace="kube-system" Pod="coredns-668d6bf9bc-kckcx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kckcx-" Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.875 [INFO][4720] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" Namespace="kube-system" Pod="coredns-668d6bf9bc-kckcx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0" Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.914 [INFO][4744] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" HandleID="k8s-pod-network.a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" Workload="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0" Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.915 [INFO][4744] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" HandleID="k8s-pod-network.a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" Workload="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7290), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-kckcx", "timestamp":"2025-11-08 00:24:42.914660711 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.915 [INFO][4744] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.915 [INFO][4744] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.915 [INFO][4744] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.925 [INFO][4744] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" host="localhost" Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.930 [INFO][4744] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.933 [INFO][4744] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.935 [INFO][4744] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.937 [INFO][4744] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.937 [INFO][4744] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" host="localhost" Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.939 [INFO][4744] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003 Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.943 [INFO][4744] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" host="localhost" Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.948 [INFO][4744] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" host="localhost" Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.948 [INFO][4744] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" host="localhost" Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.948 [INFO][4744] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:24:42.970211 containerd[1574]: 2025-11-08 00:24:42.948 [INFO][4744] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" HandleID="k8s-pod-network.a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" Workload="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0" Nov 8 00:24:42.970865 containerd[1574]: 2025-11-08 00:24:42.951 [INFO][4720] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" Namespace="kube-system" Pod="coredns-668d6bf9bc-kckcx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kckcx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-kckcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29e1f61b725", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:24:42.970865 containerd[1574]: 2025-11-08 00:24:42.951 [INFO][4720] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" Namespace="kube-system" Pod="coredns-668d6bf9bc-kckcx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0" Nov 8 00:24:42.970865 containerd[1574]: 2025-11-08 00:24:42.951 [INFO][4720] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29e1f61b725 ContainerID="a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" Namespace="kube-system" Pod="coredns-668d6bf9bc-kckcx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0" Nov 8 00:24:42.970865 containerd[1574]: 2025-11-08 00:24:42.956 [INFO][4720] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" Namespace="kube-system" Pod="coredns-668d6bf9bc-kckcx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0" Nov 8 00:24:42.970865 
containerd[1574]: 2025-11-08 00:24:42.957 [INFO][4720] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" Namespace="kube-system" Pod="coredns-668d6bf9bc-kckcx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kckcx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003", Pod:"coredns-668d6bf9bc-kckcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29e1f61b725", MAC:"7e:f1:1c:37:34:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:24:42.970865 containerd[1574]: 2025-11-08 00:24:42.967 [INFO][4720] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003" Namespace="kube-system" Pod="coredns-668d6bf9bc-kckcx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0" Nov 8 00:24:42.987205 containerd[1574]: time="2025-11-08T00:24:42.987112537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:42.987205 containerd[1574]: time="2025-11-08T00:24:42.987163782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:42.987205 containerd[1574]: time="2025-11-08T00:24:42.987178519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:42.987417 containerd[1574]: time="2025-11-08T00:24:42.987267643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:43.018476 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:24:43.048133 containerd[1574]: time="2025-11-08T00:24:43.048065072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kckcx,Uid:21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9,Namespace:kube-system,Attempt:1,} returns sandbox id \"a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003\"" Nov 8 00:24:43.049335 kubelet[2659]: E1108 00:24:43.049307 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:43.052327 containerd[1574]: time="2025-11-08T00:24:43.052280100Z" level=info msg="CreateContainer within sandbox \"a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:24:43.056560 systemd-networkd[1245]: cali3fc83c98ea4: Link UP Nov 8 00:24:43.057034 systemd-networkd[1245]: cali3fc83c98ea4: Gained carrier Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:42.891 [INFO][4732] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:42.912 [INFO][4732] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0 calico-apiserver-54f8bbf6f- calico-apiserver 9d10b4e8-1a4b-40ad-b663-a53c60424a45 1047 0 2025-11-08 00:24:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54f8bbf6f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54f8bbf6f-npzst eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3fc83c98ea4 [] [] }} ContainerID="49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" Namespace="calico-apiserver" Pod="calico-apiserver-54f8bbf6f-npzst" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-" Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:42.912 [INFO][4732] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" Namespace="calico-apiserver" Pod="calico-apiserver-54f8bbf6f-npzst" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:42.942 [INFO][4755] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" HandleID="k8s-pod-network.49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:42.943 [INFO][4755] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" HandleID="k8s-pod-network.49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-54f8bbf6f-npzst", "timestamp":"2025-11-08 00:24:42.942991715 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:42.943 [INFO][4755] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:42.948 [INFO][4755] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:42.948 [INFO][4755] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:43.026 [INFO][4755] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" host="localhost" Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:43.030 [INFO][4755] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:43.033 [INFO][4755] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:43.035 [INFO][4755] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:43.037 [INFO][4755] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:43.038 [INFO][4755] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" host="localhost" Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:43.039 [INFO][4755] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:43.046 [INFO][4755] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" host="localhost" Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:43.051 [INFO][4755] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" host="localhost" Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:43.051 [INFO][4755] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" host="localhost" Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:43.051 [INFO][4755] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:24:43.070340 containerd[1574]: 2025-11-08 00:24:43.051 [INFO][4755] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" HandleID="k8s-pod-network.49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" Nov 8 00:24:43.071000 containerd[1574]: 2025-11-08 00:24:43.054 [INFO][4732] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" Namespace="calico-apiserver" Pod="calico-apiserver-54f8bbf6f-npzst" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0", GenerateName:"calico-apiserver-54f8bbf6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9d10b4e8-1a4b-40ad-b663-a53c60424a45", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f8bbf6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54f8bbf6f-npzst", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3fc83c98ea4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:24:43.071000 containerd[1574]: 2025-11-08 00:24:43.054 [INFO][4732] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" Namespace="calico-apiserver" Pod="calico-apiserver-54f8bbf6f-npzst" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" Nov 8 00:24:43.071000 containerd[1574]: 2025-11-08 00:24:43.054 [INFO][4732] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fc83c98ea4 ContainerID="49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" Namespace="calico-apiserver" Pod="calico-apiserver-54f8bbf6f-npzst" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" Nov 8 00:24:43.071000 containerd[1574]: 2025-11-08 00:24:43.056 [INFO][4732] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" Namespace="calico-apiserver" Pod="calico-apiserver-54f8bbf6f-npzst" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" Nov 8 00:24:43.071000 containerd[1574]: 2025-11-08 00:24:43.057 [INFO][4732] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" Namespace="calico-apiserver" Pod="calico-apiserver-54f8bbf6f-npzst" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0", GenerateName:"calico-apiserver-54f8bbf6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9d10b4e8-1a4b-40ad-b663-a53c60424a45", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f8bbf6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a", Pod:"calico-apiserver-54f8bbf6f-npzst", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3fc83c98ea4", MAC:"ae:a0:30:bf:5b:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:24:43.071000 containerd[1574]: 2025-11-08 00:24:43.066 [INFO][4732] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a" Namespace="calico-apiserver" Pod="calico-apiserver-54f8bbf6f-npzst" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" Nov 8 00:24:43.075695 containerd[1574]: time="2025-11-08T00:24:43.075635150Z" level=info msg="CreateContainer within sandbox \"a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"85ec115b315a6bf0d387dee6ac8d967919f22d9cbea5e21fea958122a1be5963\"" Nov 8 00:24:43.076288 containerd[1574]: time="2025-11-08T00:24:43.076237371Z" level=info msg="StartContainer for \"85ec115b315a6bf0d387dee6ac8d967919f22d9cbea5e21fea958122a1be5963\"" Nov 8 00:24:43.095418 containerd[1574]: time="2025-11-08T00:24:43.095283559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:43.095418 containerd[1574]: time="2025-11-08T00:24:43.095363917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:43.095418 containerd[1574]: time="2025-11-08T00:24:43.095380367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:43.095686 containerd[1574]: time="2025-11-08T00:24:43.095523391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:43.124497 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:24:43.145237 containerd[1574]: time="2025-11-08T00:24:43.144996739Z" level=info msg="StartContainer for \"85ec115b315a6bf0d387dee6ac8d967919f22d9cbea5e21fea958122a1be5963\" returns successfully" Nov 8 00:24:43.159951 containerd[1574]: time="2025-11-08T00:24:43.159897116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f8bbf6f-npzst,Uid:9d10b4e8-1a4b-40ad-b663-a53c60424a45,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a\"" Nov 8 00:24:43.161769 containerd[1574]: time="2025-11-08T00:24:43.161740077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:24:43.504216 containerd[1574]: time="2025-11-08T00:24:43.504146140Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:24:43.510328 containerd[1574]: time="2025-11-08T00:24:43.510274008Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:24:43.510513 containerd[1574]: time="2025-11-08T00:24:43.510374333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:24:43.510565 kubelet[2659]: E1108 00:24:43.510524 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:24:43.510657 kubelet[2659]: E1108 00:24:43.510584 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:24:43.510792 kubelet[2659]: E1108 00:24:43.510734 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lz982,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54f8bbf6f-npzst_calico-apiserver(9d10b4e8-1a4b-40ad-b663-a53c60424a45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:24:43.512818 kubelet[2659]: E1108 00:24:43.512726 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f8bbf6f-npzst" podUID="9d10b4e8-1a4b-40ad-b663-a53c60424a45" Nov 8 00:24:43.721128 containerd[1574]: time="2025-11-08T00:24:43.721063658Z" level=info msg="StopPodSandbox for \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\"" Nov 8 00:24:43.869861 systemd[1]: Started sshd@8-10.0.0.93:22-10.0.0.1:49926.service - OpenSSH per-connection server daemon (10.0.0.1:49926). 
Nov 8 00:24:43.901100 kubelet[2659]: E1108 00:24:43.901031 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f8bbf6f-npzst" podUID="9d10b4e8-1a4b-40ad-b663-a53c60424a45" Nov 8 00:24:43.903006 kubelet[2659]: E1108 00:24:43.902970 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:43.907036 sshd[4939]: Accepted publickey for core from 10.0.0.1 port 49926 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:24:43.909327 sshd[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:43.913740 systemd-logind[1557]: New session 9 of user core. Nov 8 00:24:43.923822 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:24:44.112291 sshd[4939]: pam_unix(sshd:session): session closed for user core Nov 8 00:24:44.119330 systemd-networkd[1245]: cali3fc83c98ea4: Gained IPv6LL Nov 8 00:24:44.122148 systemd[1]: sshd@8-10.0.0.93:22-10.0.0.1:49926.service: Deactivated successfully. Nov 8 00:24:44.125601 containerd[1574]: 2025-11-08 00:24:43.931 [INFO][4932] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Nov 8 00:24:44.125601 containerd[1574]: 2025-11-08 00:24:43.931 [INFO][4932] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" iface="eth0" netns="/var/run/netns/cni-d5dd1b47-e821-14f3-af79-d4e11f7de044" Nov 8 00:24:44.125601 containerd[1574]: 2025-11-08 00:24:43.931 [INFO][4932] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" iface="eth0" netns="/var/run/netns/cni-d5dd1b47-e821-14f3-af79-d4e11f7de044" Nov 8 00:24:44.125601 containerd[1574]: 2025-11-08 00:24:43.932 [INFO][4932] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" iface="eth0" netns="/var/run/netns/cni-d5dd1b47-e821-14f3-af79-d4e11f7de044" Nov 8 00:24:44.125601 containerd[1574]: 2025-11-08 00:24:43.932 [INFO][4932] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Nov 8 00:24:44.125601 containerd[1574]: 2025-11-08 00:24:43.932 [INFO][4932] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Nov 8 00:24:44.125601 containerd[1574]: 2025-11-08 00:24:43.953 [INFO][4945] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" HandleID="k8s-pod-network.a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Workload="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" Nov 8 00:24:44.125601 containerd[1574]: 2025-11-08 00:24:43.953 [INFO][4945] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:24:44.125601 containerd[1574]: 2025-11-08 00:24:43.953 [INFO][4945] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:24:44.125601 containerd[1574]: 2025-11-08 00:24:44.104 [WARNING][4945] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" HandleID="k8s-pod-network.a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Workload="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" Nov 8 00:24:44.125601 containerd[1574]: 2025-11-08 00:24:44.104 [INFO][4945] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" HandleID="k8s-pod-network.a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Workload="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" Nov 8 00:24:44.125601 containerd[1574]: 2025-11-08 00:24:44.108 [INFO][4945] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:24:44.125601 containerd[1574]: 2025-11-08 00:24:44.113 [INFO][4932] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Nov 8 00:24:44.131163 systemd[1]: run-netns-cni\x2dd5dd1b47\x2de821\x2d14f3\x2daf79\x2dd4e11f7de044.mount: Deactivated successfully. Nov 8 00:24:44.132441 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:24:44.132566 containerd[1574]: time="2025-11-08T00:24:44.132518827Z" level=info msg="TearDown network for sandbox \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\" successfully" Nov 8 00:24:44.132625 containerd[1574]: time="2025-11-08T00:24:44.132566805Z" level=info msg="StopPodSandbox for \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\" returns successfully" Nov 8 00:24:44.133108 systemd-logind[1557]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:24:44.133777 containerd[1574]: time="2025-11-08T00:24:44.133447582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fd777b6d-qk5xd,Uid:258307e3-fc8b-44da-8b83-06fe3d2024fa,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:24:44.134177 systemd-logind[1557]: Removed session 9. 
Nov 8 00:24:44.329909 systemd-networkd[1245]: calibe3e4ebe1a8: Link UP Nov 8 00:24:44.330661 systemd-networkd[1245]: calibe3e4ebe1a8: Gained carrier Nov 8 00:24:44.345839 kubelet[2659]: I1108 00:24:44.345771 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kckcx" podStartSLOduration=37.345745302 podStartE2EDuration="37.345745302s" podCreationTimestamp="2025-11-08 00:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:24:44.120792736 +0000 UTC m=+43.499912750" watchObservedRunningTime="2025-11-08 00:24:44.345745302 +0000 UTC m=+43.724865296" Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.235 [INFO][4977] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.249 [INFO][4977] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0 calico-apiserver-65fd777b6d- calico-apiserver 258307e3-fc8b-44da-8b83-06fe3d2024fa 1075 0 2025-11-08 00:24:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65fd777b6d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-65fd777b6d-qk5xd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibe3e4ebe1a8 [] [] }} ContainerID="d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" Namespace="calico-apiserver" Pod="calico-apiserver-65fd777b6d-qk5xd" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-" Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.249 [INFO][4977] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" Namespace="calico-apiserver" Pod="calico-apiserver-65fd777b6d-qk5xd" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.287 [INFO][4985] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" HandleID="k8s-pod-network.d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" Workload="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.287 [INFO][4985] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" HandleID="k8s-pod-network.d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" Workload="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004352f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-65fd777b6d-qk5xd", "timestamp":"2025-11-08 00:24:44.287559458 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.287 [INFO][4985] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.287 [INFO][4985] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.288 [INFO][4985] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.295 [INFO][4985] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" host="localhost" Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.300 [INFO][4985] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.304 [INFO][4985] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.306 [INFO][4985] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.308 [INFO][4985] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.308 [INFO][4985] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" host="localhost" Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.310 [INFO][4985] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.314 [INFO][4985] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" host="localhost" Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.323 [INFO][4985] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" host="localhost" Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.323 [INFO][4985] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" host="localhost" Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.323 [INFO][4985] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
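The IPAM walk above is Calico's fast path: this host already holds an affinity for block 192.168.88.128/26, so the plugin loads that block, claims the next free address (192.168.88.134), and writes the block back while holding the host-wide lock. A /26 leaves 6 host bits, i.e. 64 pod addresses per block. A short sketch of that block arithmetic with the standard net/netip package (the CIDR and claimed address are taken from the log):

```go
// blockmath: show which addresses fall inside a Calico /26 IPAM block.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // block from the log

	// A /26 leaves 32-26 = 6 host bits, i.e. 2^6 = 64 addresses per block.
	fmt.Printf("addresses per block: %d\n", 1<<(32-block.Bits()))

	// Walk the block and confirm the claimed pod IP falls inside it.
	claimed := netip.MustParseAddr("192.168.88.134")
	n := 0
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		n++
	}
	fmt.Printf("enumerated %d addresses; contains %s: %v\n",
		n, claimed, block.Contains(claimed))
}
```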
Nov 8 00:24:44.349401 containerd[1574]: 2025-11-08 00:24:44.323 [INFO][4985] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" HandleID="k8s-pod-network.d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" Workload="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" Nov 8 00:24:44.349959 containerd[1574]: 2025-11-08 00:24:44.327 [INFO][4977] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" Namespace="calico-apiserver" Pod="calico-apiserver-65fd777b6d-qk5xd" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0", GenerateName:"calico-apiserver-65fd777b6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"258307e3-fc8b-44da-8b83-06fe3d2024fa", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65fd777b6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-65fd777b6d-qk5xd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe3e4ebe1a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:24:44.349959 containerd[1574]: 2025-11-08 00:24:44.327 [INFO][4977] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" Namespace="calico-apiserver" Pod="calico-apiserver-65fd777b6d-qk5xd" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" Nov 8 00:24:44.349959 containerd[1574]: 2025-11-08 00:24:44.327 [INFO][4977] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe3e4ebe1a8 ContainerID="d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" Namespace="calico-apiserver" Pod="calico-apiserver-65fd777b6d-qk5xd" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" Nov 8 00:24:44.349959 containerd[1574]: 2025-11-08 00:24:44.330 [INFO][4977] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" Namespace="calico-apiserver" Pod="calico-apiserver-65fd777b6d-qk5xd" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" Nov 8 00:24:44.349959 containerd[1574]: 2025-11-08 00:24:44.331 [INFO][4977] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" Namespace="calico-apiserver" Pod="calico-apiserver-65fd777b6d-qk5xd" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0", GenerateName:"calico-apiserver-65fd777b6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"258307e3-fc8b-44da-8b83-06fe3d2024fa", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65fd777b6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e", Pod:"calico-apiserver-65fd777b6d-qk5xd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe3e4ebe1a8", MAC:"b6:6e:01:7a:63:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:24:44.349959 containerd[1574]: 2025-11-08 00:24:44.343 [INFO][4977] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e" Namespace="calico-apiserver" Pod="calico-apiserver-65fd777b6d-qk5xd" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" Nov 8 00:24:44.367660 containerd[1574]: time="2025-11-08T00:24:44.367575068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:44.367817 containerd[1574]: time="2025-11-08T00:24:44.367668530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:44.368494 containerd[1574]: time="2025-11-08T00:24:44.368405972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:44.368615 containerd[1574]: time="2025-11-08T00:24:44.368568783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:44.398723 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:24:44.429285 containerd[1574]: time="2025-11-08T00:24:44.429242463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fd777b6d-qk5xd,Uid:258307e3-fc8b-44da-8b83-06fe3d2024fa,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e\"" Nov 8 00:24:44.430988 containerd[1574]: time="2025-11-08T00:24:44.430958283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:24:44.693567 systemd-networkd[1245]: cali29e1f61b725: Gained IPv6LL Nov 8 00:24:44.721705 containerd[1574]: time="2025-11-08T00:24:44.721669994Z" level=info msg="StopPodSandbox for \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\"" Nov 8 00:24:44.722100 containerd[1574]: time="2025-11-08T00:24:44.722061407Z" level=info msg="StopPodSandbox for \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\"" Nov 8 00:24:44.722539 containerd[1574]: time="2025-11-08T00:24:44.722288637Z" level=info msg="StopPodSandbox for \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\"" Nov 8 00:24:44.832849 containerd[1574]: 2025-11-08 00:24:44.777 [INFO][5075] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Nov 8 00:24:44.832849 containerd[1574]: 2025-11-08 00:24:44.778 [INFO][5075] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" iface="eth0" netns="/var/run/netns/cni-70279caa-8e18-a236-8522-6e3df374e997" Nov 8 00:24:44.832849 containerd[1574]: 2025-11-08 00:24:44.778 [INFO][5075] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" iface="eth0" netns="/var/run/netns/cni-70279caa-8e18-a236-8522-6e3df374e997" Nov 8 00:24:44.832849 containerd[1574]: 2025-11-08 00:24:44.779 [INFO][5075] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" iface="eth0" netns="/var/run/netns/cni-70279caa-8e18-a236-8522-6e3df374e997" Nov 8 00:24:44.832849 containerd[1574]: 2025-11-08 00:24:44.779 [INFO][5075] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Nov 8 00:24:44.832849 containerd[1574]: 2025-11-08 00:24:44.779 [INFO][5075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Nov 8 00:24:44.832849 containerd[1574]: 2025-11-08 00:24:44.808 [INFO][5109] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" HandleID="k8s-pod-network.f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Workload="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" Nov 8 00:24:44.832849 containerd[1574]: 2025-11-08 00:24:44.808 [INFO][5109] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:24:44.832849 containerd[1574]: 2025-11-08 00:24:44.808 [INFO][5109] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
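The teardown sequences here (and the earlier one for sandbox a7b5459e...) show why CNI DEL has to be idempotent: the workload's veth is "already gone. Nothing to do", and the IPAM plugin logs a WARNING that the address it was asked to release "doesn't exist" and simply ignores it, because a DEL can be replayed after a partial failure. The same pattern in miniature, with a hypothetical in-memory allocator standing in for the datastore (the type and method names are illustrative, not Calico's API):

```go
// releasedemo: idempotent release, as in the CNI DEL path logged above.
package main

import "fmt"

// allocator is a stand-in for the IPAM datastore keyed by handle ID.
type allocator struct{ byHandle map[string]string }

// Release frees the address for a handle; a missing handle is not an error,
// so replaying a DEL after a partial teardown converges instead of failing.
func (a *allocator) Release(handle string) {
	if _, ok := a.byHandle[handle]; !ok {
		fmt.Printf("WARNING: asked to release %s but it doesn't exist. Ignoring\n", handle)
		return
	}
	delete(a.byHandle, handle)
	fmt.Printf("released %s\n", handle)
}

func main() {
	a := &allocator{byHandle: map[string]string{
		"k8s-pod-network.a7b5459e": "192.168.88.133",
	}}
	a.Release("k8s-pod-network.a7b5459e") // first DEL frees the address
	a.Release("k8s-pod-network.a7b5459e") // a replayed DEL is a no-op
}
```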
Nov 8 00:24:44.832849 containerd[1574]: 2025-11-08 00:24:44.817 [WARNING][5109] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" HandleID="k8s-pod-network.f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Workload="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" Nov 8 00:24:44.832849 containerd[1574]: 2025-11-08 00:24:44.818 [INFO][5109] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" HandleID="k8s-pod-network.f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Workload="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" Nov 8 00:24:44.832849 containerd[1574]: 2025-11-08 00:24:44.819 [INFO][5109] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:24:44.832849 containerd[1574]: 2025-11-08 00:24:44.824 [INFO][5075] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Nov 8 00:24:44.837249 containerd[1574]: time="2025-11-08T00:24:44.835527159Z" level=info msg="TearDown network for sandbox \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\" successfully" Nov 8 00:24:44.837249 containerd[1574]: time="2025-11-08T00:24:44.835568615Z" level=info msg="StopPodSandbox for \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\" returns successfully" Nov 8 00:24:44.837390 kubelet[2659]: E1108 00:24:44.835939 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:44.837771 containerd[1574]: time="2025-11-08T00:24:44.837740847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rzrnf,Uid:10e98c86-5971-470b-a7fe-1df841b99600,Namespace:kube-system,Attempt:1,}" Nov 8 00:24:44.845823 systemd[1]: run-netns-cni\x2d70279caa\x2d8e18\x2da236\x2d8522\x2d6e3df374e997.mount: Deactivated successfully. Nov 8 00:24:44.850335 containerd[1574]: 2025-11-08 00:24:44.773 [INFO][5074] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Nov 8 00:24:44.850335 containerd[1574]: 2025-11-08 00:24:44.774 [INFO][5074] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" iface="eth0" netns="/var/run/netns/cni-00fb133a-7396-ffd1-3bdf-4d40fc1aba6b" Nov 8 00:24:44.850335 containerd[1574]: 2025-11-08 00:24:44.774 [INFO][5074] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" iface="eth0" netns="/var/run/netns/cni-00fb133a-7396-ffd1-3bdf-4d40fc1aba6b" Nov 8 00:24:44.850335 containerd[1574]: 2025-11-08 00:24:44.774 [INFO][5074] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" iface="eth0" netns="/var/run/netns/cni-00fb133a-7396-ffd1-3bdf-4d40fc1aba6b" Nov 8 00:24:44.850335 containerd[1574]: 2025-11-08 00:24:44.774 [INFO][5074] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Nov 8 00:24:44.850335 containerd[1574]: 2025-11-08 00:24:44.774 [INFO][5074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Nov 8 00:24:44.850335 containerd[1574]: 2025-11-08 00:24:44.808 [INFO][5100] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" HandleID="k8s-pod-network.3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" Nov 8 00:24:44.850335 containerd[1574]: 2025-11-08 00:24:44.808 [INFO][5100] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:24:44.850335 containerd[1574]: 2025-11-08 00:24:44.819 [INFO][5100] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:24:44.850335 containerd[1574]: 2025-11-08 00:24:44.831 [WARNING][5100] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" HandleID="k8s-pod-network.3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" Nov 8 00:24:44.850335 containerd[1574]: 2025-11-08 00:24:44.831 [INFO][5100] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" HandleID="k8s-pod-network.3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" Nov 8 00:24:44.850335 containerd[1574]: 2025-11-08 00:24:44.833 [INFO][5100] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:24:44.850335 containerd[1574]: 2025-11-08 00:24:44.840 [INFO][5074] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Nov 8 00:24:44.853482 containerd[1574]: time="2025-11-08T00:24:44.852884942Z" level=info msg="TearDown network for sandbox \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\" successfully" Nov 8 00:24:44.853482 containerd[1574]: time="2025-11-08T00:24:44.852948389Z" level=info msg="StopPodSandbox for \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\" returns successfully" Nov 8 00:24:44.856883 containerd[1574]: time="2025-11-08T00:24:44.856837925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f8bbf6f-szk2c,Uid:a83aa0bc-f007-4d1f-95cf-997e1c8ab851,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:24:44.857724 systemd[1]: run-netns-cni\x2d00fb133a\x2d7396\x2dffd1\x2d3bdf\x2d4d40fc1aba6b.mount: Deactivated successfully. Nov 8 00:24:44.870556 containerd[1574]: 2025-11-08 00:24:44.774 [INFO][5087] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Nov 8 00:24:44.870556 containerd[1574]: 2025-11-08 00:24:44.774 [INFO][5087] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" iface="eth0" netns="/var/run/netns/cni-0992fdc2-24e2-eb16-837f-b417449f0c72" Nov 8 00:24:44.870556 containerd[1574]: 2025-11-08 00:24:44.774 [INFO][5087] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" iface="eth0" netns="/var/run/netns/cni-0992fdc2-24e2-eb16-837f-b417449f0c72" Nov 8 00:24:44.870556 containerd[1574]: 2025-11-08 00:24:44.775 [INFO][5087] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" iface="eth0" netns="/var/run/netns/cni-0992fdc2-24e2-eb16-837f-b417449f0c72" Nov 8 00:24:44.870556 containerd[1574]: 2025-11-08 00:24:44.775 [INFO][5087] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Nov 8 00:24:44.870556 containerd[1574]: 2025-11-08 00:24:44.775 [INFO][5087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Nov 8 00:24:44.870556 containerd[1574]: 2025-11-08 00:24:44.813 [INFO][5102] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" HandleID="k8s-pod-network.55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Workload="localhost-k8s-csi--node--driver--5mbvz-eth0" Nov 8 00:24:44.870556 containerd[1574]: 2025-11-08 00:24:44.814 [INFO][5102] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:24:44.870556 containerd[1574]: 2025-11-08 00:24:44.840 [INFO][5102] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:24:44.870556 containerd[1574]: 2025-11-08 00:24:44.855 [WARNING][5102] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" HandleID="k8s-pod-network.55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Workload="localhost-k8s-csi--node--driver--5mbvz-eth0" Nov 8 00:24:44.870556 containerd[1574]: 2025-11-08 00:24:44.855 [INFO][5102] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" HandleID="k8s-pod-network.55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Workload="localhost-k8s-csi--node--driver--5mbvz-eth0" Nov 8 00:24:44.870556 containerd[1574]: 2025-11-08 00:24:44.858 [INFO][5102] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:24:44.870556 containerd[1574]: 2025-11-08 00:24:44.865 [INFO][5087] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Nov 8 00:24:44.871590 containerd[1574]: time="2025-11-08T00:24:44.871460647Z" level=info msg="TearDown network for sandbox \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\" successfully" Nov 8 00:24:44.871590 containerd[1574]: time="2025-11-08T00:24:44.871494599Z" level=info msg="StopPodSandbox for \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\" returns successfully" Nov 8 00:24:44.872490 containerd[1574]: time="2025-11-08T00:24:44.872462688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5mbvz,Uid:9805d816-e7c8-479d-9360-d3b3efa64586,Namespace:calico-system,Attempt:1,}" Nov 8 00:24:44.884276 systemd[1]: run-netns-cni\x2d0992fdc2\x2d24e2\x2deb16\x2d837f\x2db417449f0c72.mount: Deactivated successfully. Nov 8 00:24:44.917225 containerd[1574]: time="2025-11-08T00:24:44.917114571Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:24:44.922686 kubelet[2659]: E1108 00:24:44.922648 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:44.924396 kubelet[2659]: E1108 00:24:44.924341 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f8bbf6f-npzst" podUID="9d10b4e8-1a4b-40ad-b663-a53c60424a45" Nov 8 00:24:44.925441 containerd[1574]: time="2025-11-08T00:24:44.925392262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:24:44.925633 containerd[1574]: time="2025-11-08T00:24:44.925548702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:24:44.931758 kubelet[2659]: E1108 00:24:44.926251 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:24:44.931758 kubelet[2659]: E1108 00:24:44.926308 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:24:44.931758 kubelet[2659]: E1108 00:24:44.931663 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7df28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65fd777b6d-qk5xd_calico-apiserver(258307e3-fc8b-44da-8b83-06fe3d2024fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:24:44.933451 kubelet[2659]: E1108 00:24:44.933387 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65fd777b6d-qk5xd" podUID="258307e3-fc8b-44da-8b83-06fe3d2024fa" Nov 8 00:24:45.369004 systemd-networkd[1245]: cali7e4d9005998: Link UP Nov 8 00:24:45.370335 systemd-networkd[1245]: cali7e4d9005998: Gained carrier Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:44.942 [INFO][5138] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.276 [INFO][5138] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0 coredns-668d6bf9bc- kube-system 10e98c86-5971-470b-a7fe-1df841b99600 1102 0 2025-11-08 00:24:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-rzrnf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7e4d9005998 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzrnf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rzrnf-" Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.276 [INFO][5138] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzrnf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.327 [INFO][5191] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" HandleID="k8s-pod-network.4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" Workload="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.327 [INFO][5191] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" HandleID="k8s-pod-network.4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" Workload="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00053e540), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-rzrnf", "timestamp":"2025-11-08 00:24:45.327411581 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.327 [INFO][5191] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.327 [INFO][5191] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
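Each successful CNI ADD in this log ends with systemd-networkd reporting the host-side veth gaining carrier and, a little later, IPv6 link-local (calibe3e4ebe1a8, cali29e1f61b725, cali7e4d9005998 above). The host end of every Calico workload veth carries the `cali` prefix followed by an endpoint hash, so they are easy to enumerate on the node with the standard library alone (interface names here are the ones from the log):

```go
// calilist: enumerate Calico host-side veth interfaces on this node.
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		// Host ends of Calico workload veths carry the "cali" prefix,
		// e.g. calibe3e4ebe1a8 and cali7e4d9005998 in the log above.
		if !strings.HasPrefix(ifc.Name, "cali") {
			continue
		}
		up := ifc.Flags&net.FlagUp != 0
		fmt.Printf("%-16s up=%v mtu=%d mac=%s\n", ifc.Name, up, ifc.MTU, ifc.HardwareAddr)
	}
}
```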
Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.327 [INFO][5191] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.335 [INFO][5191] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" host="localhost" Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.342 [INFO][5191] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.346 [INFO][5191] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.348 [INFO][5191] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.351 [INFO][5191] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.351 [INFO][5191] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" host="localhost" Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.353 [INFO][5191] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136 Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.357 [INFO][5191] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" host="localhost" Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.362 [INFO][5191] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" host="localhost" Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.362 [INFO][5191] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" host="localhost" Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.362 [INFO][5191] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:24:45.382094 containerd[1574]: 2025-11-08 00:24:45.362 [INFO][5191] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" HandleID="k8s-pod-network.4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" Workload="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" Nov 8 00:24:45.383007 containerd[1574]: 2025-11-08 00:24:45.365 [INFO][5138] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzrnf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"10e98c86-5971-470b-a7fe-1df841b99600", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-rzrnf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e4d9005998", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:24:45.383007 containerd[1574]: 2025-11-08 00:24:45.366 [INFO][5138] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzrnf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" Nov 8 00:24:45.383007 containerd[1574]: 2025-11-08 00:24:45.366 [INFO][5138] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e4d9005998 ContainerID="4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzrnf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" Nov 8 00:24:45.383007 containerd[1574]: 2025-11-08 00:24:45.370 [INFO][5138] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzrnf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" Nov 8 00:24:45.383007 
containerd[1574]: 2025-11-08 00:24:45.371 [INFO][5138] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzrnf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"10e98c86-5971-470b-a7fe-1df841b99600", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136", Pod:"coredns-668d6bf9bc-rzrnf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e4d9005998", MAC:"fa:9b:f0:76:f3:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:24:45.383007 containerd[1574]: 2025-11-08 00:24:45.379 [INFO][5138] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzrnf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" Nov 8 00:24:45.401540 containerd[1574]: time="2025-11-08T00:24:45.401258595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:45.401540 containerd[1574]: time="2025-11-08T00:24:45.401313968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:45.401540 containerd[1574]: time="2025-11-08T00:24:45.401324358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:45.401540 containerd[1574]: time="2025-11-08T00:24:45.401449639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:45.437524 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:24:45.474881 containerd[1574]: time="2025-11-08T00:24:45.474834489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rzrnf,Uid:10e98c86-5971-470b-a7fe-1df841b99600,Namespace:kube-system,Attempt:1,} returns sandbox id \"4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136\"" Nov 8 00:24:45.476756 systemd-networkd[1245]: calie3b8347fa91: Link UP Nov 8 00:24:45.477247 kubelet[2659]: E1108 00:24:45.476973 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:45.477371 systemd-networkd[1245]: calie3b8347fa91: Gained carrier Nov 8 00:24:45.481698 containerd[1574]: time="2025-11-08T00:24:45.481657425Z" level=info msg="CreateContainer within sandbox \"4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:44.945 [INFO][5165] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.276 [INFO][5165] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5mbvz-eth0 csi-node-driver- calico-system 9805d816-e7c8-479d-9360-d3b3efa64586 1101 0 2025-11-08 00:24:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5mbvz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie3b8347fa91 [] [] }} ContainerID="da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" Namespace="calico-system" Pod="csi-node-driver-5mbvz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mbvz-" Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.276 [INFO][5165] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" Namespace="calico-system" Pod="csi-node-driver-5mbvz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mbvz-eth0" Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.332 [INFO][5189] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" HandleID="k8s-pod-network.da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" Workload="localhost-k8s-csi--node--driver--5mbvz-eth0" Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.332 [INFO][5189] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" HandleID="k8s-pod-network.da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" Workload="localhost-k8s-csi--node--driver--5mbvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000494a60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5mbvz", "timestamp":"2025-11-08 00:24:45.332001618 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.332 [INFO][5189] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.362 [INFO][5189] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.362 [INFO][5189] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.436 [INFO][5189] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" host="localhost" Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.442 [INFO][5189] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.446 [INFO][5189] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.448 [INFO][5189] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.450 [INFO][5189] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.450 [INFO][5189] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" host="localhost" Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.455 [INFO][5189] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.460 [INFO][5189] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" host="localhost" Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.466 [INFO][5189] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" host="localhost" Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.467 [INFO][5189] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" host="localhost" Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.467 [INFO][5189] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:24:45.492089 containerd[1574]: 2025-11-08 00:24:45.467 [INFO][5189] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" HandleID="k8s-pod-network.da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" Workload="localhost-k8s-csi--node--driver--5mbvz-eth0" Nov 8 00:24:45.492678 containerd[1574]: 2025-11-08 00:24:45.473 [INFO][5165] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" Namespace="calico-system" Pod="csi-node-driver-5mbvz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mbvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5mbvz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9805d816-e7c8-479d-9360-d3b3efa64586", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5mbvz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie3b8347fa91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:24:45.492678 containerd[1574]: 2025-11-08 00:24:45.473 [INFO][5165] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" Namespace="calico-system" Pod="csi-node-driver-5mbvz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mbvz-eth0" Nov 8 00:24:45.492678 containerd[1574]: 2025-11-08 00:24:45.474 [INFO][5165] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3b8347fa91 ContainerID="da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" Namespace="calico-system" Pod="csi-node-driver-5mbvz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mbvz-eth0" Nov 8 00:24:45.492678 containerd[1574]: 2025-11-08 00:24:45.478 [INFO][5165] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" Namespace="calico-system" Pod="csi-node-driver-5mbvz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mbvz-eth0" Nov 8 00:24:45.492678 containerd[1574]: 2025-11-08 00:24:45.478 [INFO][5165] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" Namespace="calico-system" Pod="csi-node-driver-5mbvz" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--5mbvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5mbvz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9805d816-e7c8-479d-9360-d3b3efa64586", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b", Pod:"csi-node-driver-5mbvz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie3b8347fa91", MAC:"06:64:32:9e:e3:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:24:45.492678 containerd[1574]: 2025-11-08 00:24:45.489 [INFO][5165] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b" Namespace="calico-system" Pod="csi-node-driver-5mbvz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mbvz-eth0" Nov 8 00:24:45.500866 containerd[1574]: time="2025-11-08T00:24:45.500761318Z" level=info msg="CreateContainer within sandbox \"4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c3054726736f2ae166ef8236d6c3099904d2e9833f69ac2c3cec4782cbb146fc\"" Nov 8 00:24:45.501460 containerd[1574]: time="2025-11-08T00:24:45.501355306Z" level=info msg="StartContainer for \"c3054726736f2ae166ef8236d6c3099904d2e9833f69ac2c3cec4782cbb146fc\"" Nov 8 00:24:45.513560 containerd[1574]: time="2025-11-08T00:24:45.512774120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:45.513560 containerd[1574]: time="2025-11-08T00:24:45.513491637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:45.513560 containerd[1574]: time="2025-11-08T00:24:45.513512836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:45.513706 containerd[1574]: time="2025-11-08T00:24:45.513638478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:45.545875 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:24:45.572049 containerd[1574]: time="2025-11-08T00:24:45.570732346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5mbvz,Uid:9805d816-e7c8-479d-9360-d3b3efa64586,Namespace:calico-system,Attempt:1,} returns sandbox id \"da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b\"" Nov 8 00:24:45.572655 containerd[1574]: time="2025-11-08T00:24:45.572565385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:24:45.574099 containerd[1574]: time="2025-11-08T00:24:45.573138335Z" level=info msg="StartContainer for \"c3054726736f2ae166ef8236d6c3099904d2e9833f69ac2c3cec4782cbb146fc\" returns successfully" Nov 8 00:24:45.584733 systemd-networkd[1245]: cali340381e6781: Link UP Nov 8 00:24:45.586733 systemd-networkd[1245]: cali340381e6781: Gained carrier Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:44.931 [INFO][5151] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.276 [INFO][5151] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0 calico-apiserver-54f8bbf6f- calico-apiserver a83aa0bc-f007-4d1f-95cf-997e1c8ab851 1100 0 2025-11-08 00:24:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54f8bbf6f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54f8bbf6f-szk2c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali340381e6781 [] [] }} ContainerID="ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" Namespace="calico-apiserver" Pod="calico-apiserver-54f8bbf6f-szk2c" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-" Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.276 [INFO][5151] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" Namespace="calico-apiserver" Pod="calico-apiserver-54f8bbf6f-szk2c" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.350 [INFO][5200] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" HandleID="k8s-pod-network.ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.350 [INFO][5200] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" HandleID="k8s-pod-network.ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000596a90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54f8bbf6f-szk2c", "timestamp":"2025-11-08 00:24:45.350244159 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.350 [INFO][5200] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.467 [INFO][5200] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.467 [INFO][5200] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.538 [INFO][5200] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" host="localhost" Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.544 [INFO][5200] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.549 [INFO][5200] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.551 [INFO][5200] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.554 [INFO][5200] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.554 [INFO][5200] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" host="localhost" Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.557 [INFO][5200] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.565 [INFO][5200] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" host="localhost" Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.572 [INFO][5200] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" host="localhost" Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.573 [INFO][5200] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" host="localhost" Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.573 [INFO][5200] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
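
After the IPAM pass the plugin turns to the dataplane, and the records that follow pick the host-side veth name (calie3b8347fa91 for the CSI pod above, cali340381e6781 for the apiserver pod below) before wiring the endpoint. The name is deterministic: Calico derives it by hashing the workload identity, so repeated CNI ADDs for the same pod converge on the same interface. A sketch of that derivation follows, assuming a sha1 hash and a 4+11-character split; the exact hash input varies across Calico versions, so treat it as illustrative.

// Sketch of deriving a stable host-side interface name like
// "calie3b8347fa91": hash the workload identity and keep "cali" plus the
// first 11 hex characters so the result fits the 15-character Linux
// interface-name limit (IFNAMSIZ minus the terminator).
package main

import (
	"crypto/sha1"
	"fmt"
)

func vethName(prefix, workloadID string) string {
	sum := sha1.Sum([]byte(workloadID))
	return prefix + fmt.Sprintf("%x", sum[:])[:11]
}

func main() {
	// The namespace/pod input here is an assumption for illustration.
	fmt.Println(vethName("cali", "calico-system/csi-node-driver-5mbvz"))
}
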
Nov 8 00:24:45.609192 containerd[1574]: 2025-11-08 00:24:45.574 [INFO][5200] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" HandleID="k8s-pod-network.ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" Nov 8 00:24:45.609850 containerd[1574]: 2025-11-08 00:24:45.578 [INFO][5151] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" Namespace="calico-apiserver" Pod="calico-apiserver-54f8bbf6f-szk2c" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0", GenerateName:"calico-apiserver-54f8bbf6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a83aa0bc-f007-4d1f-95cf-997e1c8ab851", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f8bbf6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54f8bbf6f-szk2c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali340381e6781", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:24:45.609850 containerd[1574]: 2025-11-08 00:24:45.579 [INFO][5151] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" Namespace="calico-apiserver" Pod="calico-apiserver-54f8bbf6f-szk2c" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" Nov 8 00:24:45.609850 containerd[1574]: 2025-11-08 00:24:45.579 [INFO][5151] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali340381e6781 ContainerID="ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" Namespace="calico-apiserver" Pod="calico-apiserver-54f8bbf6f-szk2c" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" Nov 8 00:24:45.609850 containerd[1574]: 2025-11-08 00:24:45.589 [INFO][5151] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" Namespace="calico-apiserver" Pod="calico-apiserver-54f8bbf6f-szk2c" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" Nov 8 00:24:45.609850 containerd[1574]: 2025-11-08 00:24:45.589 [INFO][5151] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" Namespace="calico-apiserver" Pod="calico-apiserver-54f8bbf6f-szk2c" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0", GenerateName:"calico-apiserver-54f8bbf6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a83aa0bc-f007-4d1f-95cf-997e1c8ab851", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f8bbf6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c", Pod:"calico-apiserver-54f8bbf6f-szk2c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali340381e6781", MAC:"32:9a:ff:d8:fe:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:24:45.609850 containerd[1574]: 2025-11-08 00:24:45.604 [INFO][5151] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c" Namespace="calico-apiserver" Pod="calico-apiserver-54f8bbf6f-szk2c" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" Nov 8 00:24:45.636396 containerd[1574]: time="2025-11-08T00:24:45.635865068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:24:45.636396 containerd[1574]: time="2025-11-08T00:24:45.636037366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:24:45.636396 containerd[1574]: time="2025-11-08T00:24:45.636077080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:45.637545 containerd[1574]: time="2025-11-08T00:24:45.637394555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:24:45.668173 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:24:45.703416 containerd[1574]: time="2025-11-08T00:24:45.703356250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f8bbf6f-szk2c,Uid:a83aa0bc-f007-4d1f-95cf-997e1c8ab851,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c\"" Nov 8 00:24:45.907771 containerd[1574]: time="2025-11-08T00:24:45.907565186Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:24:45.921787 containerd[1574]: time="2025-11-08T00:24:45.921716402Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:24:45.921953 containerd[1574]: time="2025-11-08T00:24:45.921839119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:24:45.923095 kubelet[2659]: E1108 00:24:45.922102 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:24:45.923095 kubelet[2659]: E1108 00:24:45.922168 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:24:45.923095 kubelet[2659]: E1108 00:24:45.922414 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjcwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5mbvz_calico-system(9805d816-e7c8-479d-9360-d3b3efa64586): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:24:45.924303 containerd[1574]: time="2025-11-08T00:24:45.924203040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:24:45.929556 kubelet[2659]: E1108 00:24:45.929526 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:45.929880 kubelet[2659]: E1108 00:24:45.929846 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:45.931125 kubelet[2659]: E1108 00:24:45.931094 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65fd777b6d-qk5xd" podUID="258307e3-fc8b-44da-8b83-06fe3d2024fa" Nov 8 00:24:46.299127 kubelet[2659]: I1108 00:24:46.298984 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:24:46.299446 kubelet[2659]: E1108 00:24:46.299400 2659 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:46.341655 containerd[1574]: time="2025-11-08T00:24:46.341616674Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:24:46.357618 systemd-networkd[1245]: calibe3e4ebe1a8: Gained IPv6LL Nov 8 00:24:46.414167 containerd[1574]: time="2025-11-08T00:24:46.414091762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:24:46.414922 containerd[1574]: time="2025-11-08T00:24:46.414176047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:24:46.414978 kubelet[2659]: E1108 00:24:46.414408 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:24:46.414978 kubelet[2659]: E1108 00:24:46.414508 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:24:46.414978 kubelet[2659]: E1108 00:24:46.414796 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jmh8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54f8bbf6f-szk2c_calico-apiserver(a83aa0bc-f007-4d1f-95cf-997e1c8ab851): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:24:46.415454 containerd[1574]: time="2025-11-08T00:24:46.415355650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:24:46.415949 kubelet[2659]: E1108 00:24:46.415905 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f8bbf6f-szk2c" podUID="a83aa0bc-f007-4d1f-95cf-997e1c8ab851" Nov 8 00:24:46.675633 kubelet[2659]: I1108 00:24:46.675562 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rzrnf" podStartSLOduration=39.675541808 podStartE2EDuration="39.675541808s" podCreationTimestamp="2025-11-08 00:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:24:46.674714047 +0000 UTC m=+46.053834051" watchObservedRunningTime="2025-11-08 00:24:46.675541808 +0000 UTC m=+46.054661802" Nov 8 00:24:46.741760 systemd-networkd[1245]: cali7e4d9005998: Gained IPv6LL Nov 8 00:24:46.806392 systemd-networkd[1245]: cali340381e6781: Gained IPv6LL Nov 8 00:24:46.818245 containerd[1574]: time="2025-11-08T00:24:46.818186245Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:24:46.820239 containerd[1574]: time="2025-11-08T00:24:46.819832623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:24:46.820239 containerd[1574]: time="2025-11-08T00:24:46.819892172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:24:46.820362 kubelet[2659]: E1108 00:24:46.820234 2659 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:24:46.820362 kubelet[2659]: E1108 00:24:46.820311 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:24:46.820580 kubelet[2659]: E1108 00:24:46.820507 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjcwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5mbvz_calico-system(9805d816-e7c8-479d-9360-d3b3efa64586): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:24:46.822010 kubelet[2659]: E1108 00:24:46.821926 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5mbvz" podUID="9805d816-e7c8-479d-9360-d3b3efa64586" Nov 8 00:24:46.918551 kernel: bpftool[5471]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:24:46.933209 kubelet[2659]: E1108 00:24:46.932042 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:46.933209 kubelet[2659]: E1108 00:24:46.932760 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:46.934088 kubelet[2659]: E1108 00:24:46.934015 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f8bbf6f-szk2c" podUID="a83aa0bc-f007-4d1f-95cf-997e1c8ab851" Nov 8 00:24:46.934398 kubelet[2659]: E1108 00:24:46.934359 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5mbvz" podUID="9805d816-e7c8-479d-9360-d3b3efa64586" Nov 8 00:24:47.225329 systemd-networkd[1245]: vxlan.calico: Link UP Nov 8 00:24:47.225342 systemd-networkd[1245]: vxlan.calico: Gained carrier Nov 8 00:24:47.381646 systemd-networkd[1245]: calie3b8347fa91: Gained IPv6LL Nov 8 00:24:47.934209 kubelet[2659]: E1108 00:24:47.934151 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:24:48.405754 systemd-networkd[1245]: vxlan.calico: Gained IPv6LL Nov 8 00:24:49.123721 systemd[1]: Started sshd@9-10.0.0.93:22-10.0.0.1:47914.service - OpenSSH per-connection server daemon (10.0.0.1:47914). 
Nov 8 00:24:49.159943 sshd[5574]: Accepted publickey for core from 10.0.0.1 port 47914 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:24:49.161784 sshd[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:49.167130 systemd-logind[1557]: New session 10 of user core. Nov 8 00:24:49.179813 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:24:49.325470 sshd[5574]: pam_unix(sshd:session): session closed for user core Nov 8 00:24:49.329017 systemd[1]: sshd@9-10.0.0.93:22-10.0.0.1:47914.service: Deactivated successfully. Nov 8 00:24:49.329511 systemd-logind[1557]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:24:49.334070 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:24:49.335416 systemd-logind[1557]: Removed session 10. Nov 8 00:24:50.727761 containerd[1574]: time="2025-11-08T00:24:50.727718009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:24:51.169296 containerd[1574]: time="2025-11-08T00:24:51.169207468Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:24:51.170925 containerd[1574]: time="2025-11-08T00:24:51.170843607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:24:51.170990 containerd[1574]: time="2025-11-08T00:24:51.170893720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:24:51.171211 kubelet[2659]: E1108 00:24:51.171143 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:24:51.171775 kubelet[2659]: E1108 00:24:51.171218 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:24:51.171775 kubelet[2659]: E1108 00:24:51.171366 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:168e7d9989574572b8f93af229b1409b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85wsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85bb64496c-96z92_calico-system(497820bc-a22f-4ed0-899b-b37a4c4036b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:24:51.173926 containerd[1574]: time="2025-11-08T00:24:51.173577465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:24:51.515893 containerd[1574]: time="2025-11-08T00:24:51.515683488Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:24:51.561130 containerd[1574]: time="2025-11-08T00:24:51.561037813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:24:51.561351 containerd[1574]: time="2025-11-08T00:24:51.561098426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:24:51.561500 kubelet[2659]: E1108 00:24:51.561409 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:24:51.561570 kubelet[2659]: E1108 00:24:51.561520 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:24:51.561728 kubelet[2659]: E1108 00:24:51.561676 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85wsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85bb64496c-96z92_calico-system(497820bc-a22f-4ed0-899b-b37a4c4036b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:24:51.563014 kubelet[2659]: E1108 00:24:51.562935 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85bb64496c-96z92" podUID="497820bc-a22f-4ed0-899b-b37a4c4036b5" Nov 8 00:24:53.721920 containerd[1574]: time="2025-11-08T00:24:53.721798239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:24:54.141509 containerd[1574]: time="2025-11-08T00:24:54.141414283Z" level=info msg="trying next host - response was http.StatusNotFound" 
host=ghcr.io Nov 8 00:24:54.142835 containerd[1574]: time="2025-11-08T00:24:54.142784242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:24:54.143038 containerd[1574]: time="2025-11-08T00:24:54.142878157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:24:54.143124 kubelet[2659]: E1108 00:24:54.143073 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:24:54.143653 kubelet[2659]: E1108 00:24:54.143140 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:24:54.143653 kubelet[2659]: E1108 00:24:54.143314 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndblm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-999d4cc44-xrwzd_calico-system(060a07da-b44f-4d4c-ae28-2a94dae48d16): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:24:54.144546 kubelet[2659]: E1108 00:24:54.144505 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-999d4cc44-xrwzd" podUID="060a07da-b44f-4d4c-ae28-2a94dae48d16" Nov 8 00:24:54.339829 systemd[1]: Started sshd@10-10.0.0.93:22-10.0.0.1:47926.service - OpenSSH per-connection server daemon (10.0.0.1:47926). Nov 8 00:24:54.370769 sshd[5598]: Accepted publickey for core from 10.0.0.1 port 47926 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:24:54.372536 sshd[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:54.377773 systemd-logind[1557]: New session 11 of user core. Nov 8 00:24:54.390784 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:24:54.534041 sshd[5598]: pam_unix(sshd:session): session closed for user core Nov 8 00:24:54.542109 systemd[1]: sshd@10-10.0.0.93:22-10.0.0.1:47926.service: Deactivated successfully. Nov 8 00:24:54.545468 systemd-logind[1557]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:24:54.545632 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:24:54.546707 systemd-logind[1557]: Removed session 11. 
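
The alternation between ErrImagePull and ImagePullBackOff in the kubelet records is its retry policy at work: after a failed pull, the pod worker waits out an exponentially growing delay before trying again, which is why identical "not found" errors recur at first every few seconds and eventually minutes apart (compare the 00:24:54 and 00:24:56 attempts here with the gaps later in the log). A sketch of that schedule follows, assuming the commonly cited kubelet defaults of a 10s initial delay doubling to a 5m cap; the real constants live in kubelet configuration and may differ.

// Minimal sketch of an image-pull backoff schedule: exponential growth
// from an initial delay, clamped at a maximum.
package main

import (
	"fmt"
	"time"
)

func backoffDelays(initial, max time.Duration, attempts int) []time.Duration {
	var out []time.Duration
	d := initial
	for i := 0; i < attempts; i++ {
		out = append(out, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return out
}

func main() {
	// Prints: [10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s]
	fmt.Println(backoffDelays(10*time.Second, 5*time.Minute, 8))
}
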
Nov 8 00:24:56.724324 containerd[1574]: time="2025-11-08T00:24:56.723834182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:24:57.181228 containerd[1574]: time="2025-11-08T00:24:57.181164512Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:24:57.230331 containerd[1574]: time="2025-11-08T00:24:57.230251321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:24:57.230558 containerd[1574]: time="2025-11-08T00:24:57.230321181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:24:57.230624 kubelet[2659]: E1108 00:24:57.230566 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:24:57.231166 kubelet[2659]: E1108 00:24:57.230642 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:24:57.231166 kubelet[2659]: E1108 00:24:57.230911 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7df28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65fd777b6d-qk5xd_calico-apiserver(258307e3-fc8b-44da-8b83-06fe3d2024fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:24:57.231339 containerd[1574]: time="2025-11-08T00:24:57.231165755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:24:57.232269 kubelet[2659]: E1108 00:24:57.232236 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65fd777b6d-qk5xd" podUID="258307e3-fc8b-44da-8b83-06fe3d2024fa" Nov 8 00:24:58.252562 containerd[1574]: time="2025-11-08T00:24:58.252473884Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:24:58.269126 containerd[1574]: time="2025-11-08T00:24:58.269037578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:24:58.269126 containerd[1574]: time="2025-11-08T00:24:58.269088934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:24:58.269362 kubelet[2659]: E1108 00:24:58.269313 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:24:58.269857 kubelet[2659]: E1108 00:24:58.269389 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:24:58.269857 kubelet[2659]: E1108 00:24:58.269550 2659 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvmp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-z8t5h_calico-system(e4d2644c-0b49-478b-8030-eea32781a579): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:24:58.270827 kubelet[2659]: E1108 00:24:58.270768 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t5h" 
podUID="e4d2644c-0b49-478b-8030-eea32781a579" Nov 8 00:24:58.722815 containerd[1574]: time="2025-11-08T00:24:58.722739778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:24:59.067065 containerd[1574]: time="2025-11-08T00:24:59.066878420Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:24:59.068258 containerd[1574]: time="2025-11-08T00:24:59.068219039Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:24:59.068350 containerd[1574]: time="2025-11-08T00:24:59.068258704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:24:59.068574 kubelet[2659]: E1108 00:24:59.068515 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:24:59.068652 kubelet[2659]: E1108 00:24:59.068595 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:24:59.068823 kubelet[2659]: E1108 00:24:59.068778 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjcwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5mbvz_calico-system(9805d816-e7c8-479d-9360-d3b3efa64586): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:24:59.070770 containerd[1574]: time="2025-11-08T00:24:59.070737837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:24:59.399174 containerd[1574]: time="2025-11-08T00:24:59.399094790Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:24:59.400487 containerd[1574]: time="2025-11-08T00:24:59.400372232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:24:59.400661 containerd[1574]: time="2025-11-08T00:24:59.400461298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:24:59.400849 kubelet[2659]: E1108 00:24:59.400786 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:24:59.401184 kubelet[2659]: E1108 00:24:59.400862 2659 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:24:59.401184 kubelet[2659]: E1108 00:24:59.401027 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjcwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5mbvz_calico-system(9805d816-e7c8-479d-9360-d3b3efa64586): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:24:59.402273 kubelet[2659]: E1108 00:24:59.402230 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-5mbvz" podUID="9805d816-e7c8-479d-9360-d3b3efa64586" Nov 8 00:24:59.549880 systemd[1]: Started sshd@11-10.0.0.93:22-10.0.0.1:54892.service - OpenSSH per-connection server daemon (10.0.0.1:54892). Nov 8 00:24:59.584731 sshd[5622]: Accepted publickey for core from 10.0.0.1 port 54892 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:24:59.586825 sshd[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:59.592225 systemd-logind[1557]: New session 12 of user core. Nov 8 00:24:59.603982 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:24:59.723093 containerd[1574]: time="2025-11-08T00:24:59.722862555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:24:59.781642 sshd[5622]: pam_unix(sshd:session): session closed for user core Nov 8 00:24:59.789941 systemd[1]: Started sshd@12-10.0.0.93:22-10.0.0.1:54896.service - OpenSSH per-connection server daemon (10.0.0.1:54896). Nov 8 00:24:59.790602 systemd[1]: sshd@11-10.0.0.93:22-10.0.0.1:54892.service: Deactivated successfully. Nov 8 00:24:59.794997 systemd-logind[1557]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:24:59.796091 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:24:59.797333 systemd-logind[1557]: Removed session 12. Nov 8 00:24:59.836715 sshd[5638]: Accepted publickey for core from 10.0.0.1 port 54896 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:24:59.838896 sshd[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:59.844240 systemd-logind[1557]: New session 13 of user core. Nov 8 00:24:59.852748 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:25:00.057247 sshd[5638]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:00.069668 systemd[1]: Started sshd@13-10.0.0.93:22-10.0.0.1:54906.service - OpenSSH per-connection server daemon (10.0.0.1:54906). Nov 8 00:25:00.080478 containerd[1574]: time="2025-11-08T00:25:00.080412647Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:25:00.097159 systemd[1]: sshd@12-10.0.0.93:22-10.0.0.1:54896.service: Deactivated successfully. Nov 8 00:25:00.099881 systemd-logind[1557]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:25:00.099965 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:25:00.101682 systemd-logind[1557]: Removed session 13. Nov 8 00:25:00.124715 sshd[5651]: Accepted publickey for core from 10.0.0.1 port 54906 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:25:00.126726 sshd[5651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:00.132065 systemd-logind[1557]: New session 14 of user core. Nov 8 00:25:00.141828 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 8 00:25:00.249473 containerd[1574]: time="2025-11-08T00:25:00.249251762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:25:00.249473 containerd[1574]: time="2025-11-08T00:25:00.249406801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:25:00.249874 kubelet[2659]: E1108 00:25:00.249829 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:25:00.249993 kubelet[2659]: E1108 00:25:00.249892 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:25:00.250066 kubelet[2659]: E1108 00:25:00.250031 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lz982,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod calico-apiserver-54f8bbf6f-npzst_calico-apiserver(9d10b4e8-1a4b-40ad-b663-a53c60424a45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:25:00.251252 kubelet[2659]: E1108 00:25:00.251226 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f8bbf6f-npzst" podUID="9d10b4e8-1a4b-40ad-b663-a53c60424a45" Nov 8 00:25:00.308807 sshd[5651]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:00.316505 systemd[1]: sshd@13-10.0.0.93:22-10.0.0.1:54906.service: Deactivated successfully. Nov 8 00:25:00.321308 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:25:00.323167 systemd-logind[1557]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:25:00.324249 systemd-logind[1557]: Removed session 14. Nov 8 00:25:00.698034 containerd[1574]: time="2025-11-08T00:25:00.697983325Z" level=info msg="StopPodSandbox for \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\"" Nov 8 00:25:00.782603 containerd[1574]: 2025-11-08 00:25:00.737 [WARNING][5680] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--z8t5h-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e4d2644c-0b49-478b-8030-eea32781a579", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582", Pod:"goldmane-666569f655-z8t5h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali04d21bd8d0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:25:00.782603 containerd[1574]: 2025-11-08 00:25:00.738 [INFO][5680] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Nov 8 00:25:00.782603 containerd[1574]: 2025-11-08 00:25:00.738 [INFO][5680] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" iface="eth0" netns="" Nov 8 00:25:00.782603 containerd[1574]: 2025-11-08 00:25:00.738 [INFO][5680] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Nov 8 00:25:00.782603 containerd[1574]: 2025-11-08 00:25:00.738 [INFO][5680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Nov 8 00:25:00.782603 containerd[1574]: 2025-11-08 00:25:00.766 [INFO][5691] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" HandleID="k8s-pod-network.4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Workload="localhost-k8s-goldmane--666569f655--z8t5h-eth0" Nov 8 00:25:00.782603 containerd[1574]: 2025-11-08 00:25:00.766 [INFO][5691] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:25:00.782603 containerd[1574]: 2025-11-08 00:25:00.766 [INFO][5691] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:25:00.782603 containerd[1574]: 2025-11-08 00:25:00.773 [WARNING][5691] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" HandleID="k8s-pod-network.4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Workload="localhost-k8s-goldmane--666569f655--z8t5h-eth0" Nov 8 00:25:00.782603 containerd[1574]: 2025-11-08 00:25:00.773 [INFO][5691] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" HandleID="k8s-pod-network.4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Workload="localhost-k8s-goldmane--666569f655--z8t5h-eth0" Nov 8 00:25:00.782603 containerd[1574]: 2025-11-08 00:25:00.775 [INFO][5691] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:25:00.782603 containerd[1574]: 2025-11-08 00:25:00.778 [INFO][5680] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Nov 8 00:25:00.783210 containerd[1574]: time="2025-11-08T00:25:00.782663974Z" level=info msg="TearDown network for sandbox \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\" successfully" Nov 8 00:25:00.783210 containerd[1574]: time="2025-11-08T00:25:00.782705772Z" level=info msg="StopPodSandbox for \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\" returns successfully" Nov 8 00:25:00.783366 containerd[1574]: time="2025-11-08T00:25:00.783327643Z" level=info msg="RemovePodSandbox for \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\"" Nov 8 00:25:00.786266 containerd[1574]: time="2025-11-08T00:25:00.786225979Z" level=info msg="Forcibly stopping sandbox \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\"" Nov 8 00:25:00.871194 containerd[1574]: 2025-11-08 00:25:00.829 [WARNING][5709] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--z8t5h-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e4d2644c-0b49-478b-8030-eea32781a579", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a734809fe4ca28057f370e89dab2420b766bbe484a82d2155294d90ff18a4582", Pod:"goldmane-666569f655-z8t5h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali04d21bd8d0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:25:00.871194 containerd[1574]: 2025-11-08 00:25:00.829 [INFO][5709] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Nov 8 00:25:00.871194 containerd[1574]: 2025-11-08 00:25:00.829 [INFO][5709] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" iface="eth0" netns="" Nov 8 00:25:00.871194 containerd[1574]: 2025-11-08 00:25:00.829 [INFO][5709] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Nov 8 00:25:00.871194 containerd[1574]: 2025-11-08 00:25:00.829 [INFO][5709] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Nov 8 00:25:00.871194 containerd[1574]: 2025-11-08 00:25:00.858 [INFO][5718] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" HandleID="k8s-pod-network.4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Workload="localhost-k8s-goldmane--666569f655--z8t5h-eth0" Nov 8 00:25:00.871194 containerd[1574]: 2025-11-08 00:25:00.858 [INFO][5718] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:25:00.871194 containerd[1574]: 2025-11-08 00:25:00.858 [INFO][5718] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:25:00.871194 containerd[1574]: 2025-11-08 00:25:00.863 [WARNING][5718] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" HandleID="k8s-pod-network.4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Workload="localhost-k8s-goldmane--666569f655--z8t5h-eth0" Nov 8 00:25:00.871194 containerd[1574]: 2025-11-08 00:25:00.863 [INFO][5718] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" HandleID="k8s-pod-network.4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Workload="localhost-k8s-goldmane--666569f655--z8t5h-eth0" Nov 8 00:25:00.871194 containerd[1574]: 2025-11-08 00:25:00.864 [INFO][5718] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:25:00.871194 containerd[1574]: 2025-11-08 00:25:00.867 [INFO][5709] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6" Nov 8 00:25:00.871831 containerd[1574]: time="2025-11-08T00:25:00.871242885Z" level=info msg="TearDown network for sandbox \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\" successfully" Nov 8 00:25:00.875679 containerd[1574]: time="2025-11-08T00:25:00.875624469Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:25:00.875752 containerd[1574]: time="2025-11-08T00:25:00.875684470Z" level=info msg="RemovePodSandbox \"4cda7c8fbc18a5d016f912159ec85271e92f5b4650c53e6d118d5a2d2ae640c6\" returns successfully" Nov 8 00:25:00.876303 containerd[1574]: time="2025-11-08T00:25:00.876267568Z" level=info msg="StopPodSandbox for \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\"" Nov 8 00:25:00.953402 containerd[1574]: 2025-11-08 00:25:00.912 [WARNING][5735] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0", GenerateName:"calico-kube-controllers-999d4cc44-", Namespace:"calico-system", SelfLink:"", UID:"060a07da-b44f-4d4c-ae28-2a94dae48d16", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"999d4cc44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475", Pod:"calico-kube-controllers-999d4cc44-xrwzd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73fc3435791", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:25:00.953402 containerd[1574]: 2025-11-08 00:25:00.913 [INFO][5735] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Nov 8 00:25:00.953402 containerd[1574]: 2025-11-08 00:25:00.913 [INFO][5735] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" iface="eth0" netns="" Nov 8 00:25:00.953402 containerd[1574]: 2025-11-08 00:25:00.913 [INFO][5735] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Nov 8 00:25:00.953402 containerd[1574]: 2025-11-08 00:25:00.913 [INFO][5735] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Nov 8 00:25:00.953402 containerd[1574]: 2025-11-08 00:25:00.935 [INFO][5743] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" HandleID="k8s-pod-network.50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Workload="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" Nov 8 00:25:00.953402 containerd[1574]: 2025-11-08 00:25:00.935 [INFO][5743] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:25:00.953402 containerd[1574]: 2025-11-08 00:25:00.935 [INFO][5743] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:25:00.953402 containerd[1574]: 2025-11-08 00:25:00.942 [WARNING][5743] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" HandleID="k8s-pod-network.50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Workload="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" Nov 8 00:25:00.953402 containerd[1574]: 2025-11-08 00:25:00.942 [INFO][5743] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" HandleID="k8s-pod-network.50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Workload="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" Nov 8 00:25:00.953402 containerd[1574]: 2025-11-08 00:25:00.944 [INFO][5743] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:25:00.953402 containerd[1574]: 2025-11-08 00:25:00.949 [INFO][5735] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Nov 8 00:25:00.953402 containerd[1574]: time="2025-11-08T00:25:00.953108614Z" level=info msg="TearDown network for sandbox \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\" successfully" Nov 8 00:25:00.953402 containerd[1574]: time="2025-11-08T00:25:00.953147777Z" level=info msg="StopPodSandbox for \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\" returns successfully" Nov 8 00:25:00.955402 containerd[1574]: time="2025-11-08T00:25:00.954798325Z" level=info msg="RemovePodSandbox for \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\"" Nov 8 00:25:00.955402 containerd[1574]: time="2025-11-08T00:25:00.954845033Z" level=info msg="Forcibly stopping sandbox \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\"" Nov 8 00:25:01.045764 containerd[1574]: 2025-11-08 00:25:01.000 [WARNING][5760] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0", GenerateName:"calico-kube-controllers-999d4cc44-", Namespace:"calico-system", SelfLink:"", UID:"060a07da-b44f-4d4c-ae28-2a94dae48d16", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"999d4cc44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4362893da566be1db935e026a040f7ce298e51f7f2617491dcbe78296ad55475", Pod:"calico-kube-controllers-999d4cc44-xrwzd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73fc3435791", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:25:01.045764 containerd[1574]: 2025-11-08 00:25:01.001 [INFO][5760] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Nov 8 00:25:01.045764 containerd[1574]: 2025-11-08 00:25:01.001 [INFO][5760] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" iface="eth0" netns="" Nov 8 00:25:01.045764 containerd[1574]: 2025-11-08 00:25:01.001 [INFO][5760] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Nov 8 00:25:01.045764 containerd[1574]: 2025-11-08 00:25:01.001 [INFO][5760] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Nov 8 00:25:01.045764 containerd[1574]: 2025-11-08 00:25:01.030 [INFO][5769] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" HandleID="k8s-pod-network.50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Workload="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" Nov 8 00:25:01.045764 containerd[1574]: 2025-11-08 00:25:01.030 [INFO][5769] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:25:01.045764 containerd[1574]: 2025-11-08 00:25:01.030 [INFO][5769] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:25:01.045764 containerd[1574]: 2025-11-08 00:25:01.037 [WARNING][5769] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" HandleID="k8s-pod-network.50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Workload="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" Nov 8 00:25:01.045764 containerd[1574]: 2025-11-08 00:25:01.037 [INFO][5769] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" HandleID="k8s-pod-network.50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Workload="localhost-k8s-calico--kube--controllers--999d4cc44--xrwzd-eth0" Nov 8 00:25:01.045764 containerd[1574]: 2025-11-08 00:25:01.038 [INFO][5769] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:25:01.045764 containerd[1574]: 2025-11-08 00:25:01.042 [INFO][5760] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1" Nov 8 00:25:01.046559 containerd[1574]: time="2025-11-08T00:25:01.046483890Z" level=info msg="TearDown network for sandbox \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\" successfully" Nov 8 00:25:01.051886 containerd[1574]: time="2025-11-08T00:25:01.051851075Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:25:01.051953 containerd[1574]: time="2025-11-08T00:25:01.051898955Z" level=info msg="RemovePodSandbox \"50b2aadcecf361a2b9e657cd66bfc798abbaa7e2b6e5bb9f7bfd7348090011f1\" returns successfully" Nov 8 00:25:01.052560 containerd[1574]: time="2025-11-08T00:25:01.052515104Z" level=info msg="StopPodSandbox for \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\"" Nov 8 00:25:01.138404 containerd[1574]: 2025-11-08 00:25:01.092 [WARNING][5786] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0", GenerateName:"calico-apiserver-65fd777b6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"258307e3-fc8b-44da-8b83-06fe3d2024fa", ResourceVersion:"1226", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65fd777b6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e", Pod:"calico-apiserver-65fd777b6d-qk5xd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe3e4ebe1a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:25:01.138404 containerd[1574]: 2025-11-08 00:25:01.092 [INFO][5786] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Nov 8 00:25:01.138404 containerd[1574]: 2025-11-08 00:25:01.092 [INFO][5786] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" iface="eth0" netns="" Nov 8 00:25:01.138404 containerd[1574]: 2025-11-08 00:25:01.093 [INFO][5786] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Nov 8 00:25:01.138404 containerd[1574]: 2025-11-08 00:25:01.093 [INFO][5786] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Nov 8 00:25:01.138404 containerd[1574]: 2025-11-08 00:25:01.119 [INFO][5795] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" HandleID="k8s-pod-network.a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Workload="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" Nov 8 00:25:01.138404 containerd[1574]: 2025-11-08 00:25:01.119 [INFO][5795] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:25:01.138404 containerd[1574]: 2025-11-08 00:25:01.119 [INFO][5795] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:25:01.138404 containerd[1574]: 2025-11-08 00:25:01.130 [WARNING][5795] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" HandleID="k8s-pod-network.a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Workload="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" Nov 8 00:25:01.138404 containerd[1574]: 2025-11-08 00:25:01.130 [INFO][5795] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" HandleID="k8s-pod-network.a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Workload="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" Nov 8 00:25:01.138404 containerd[1574]: 2025-11-08 00:25:01.132 [INFO][5795] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:25:01.138404 containerd[1574]: 2025-11-08 00:25:01.134 [INFO][5786] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Nov 8 00:25:01.138992 containerd[1574]: time="2025-11-08T00:25:01.138458953Z" level=info msg="TearDown network for sandbox \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\" successfully" Nov 8 00:25:01.138992 containerd[1574]: time="2025-11-08T00:25:01.138491464Z" level=info msg="StopPodSandbox for \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\" returns successfully" Nov 8 00:25:01.139114 containerd[1574]: time="2025-11-08T00:25:01.139079121Z" level=info msg="RemovePodSandbox for \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\"" Nov 8 00:25:01.139155 containerd[1574]: time="2025-11-08T00:25:01.139116180Z" level=info msg="Forcibly stopping sandbox \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\"" Nov 8 00:25:01.218835 containerd[1574]: 2025-11-08 00:25:01.176 [WARNING][5813] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0", GenerateName:"calico-apiserver-65fd777b6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"258307e3-fc8b-44da-8b83-06fe3d2024fa", ResourceVersion:"1226", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65fd777b6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d023e90196479f009be10b20fcd75683d7a413e5ad912bf849073d995c24285e", Pod:"calico-apiserver-65fd777b6d-qk5xd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe3e4ebe1a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:25:01.218835 containerd[1574]: 2025-11-08 00:25:01.177 [INFO][5813] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Nov 8 00:25:01.218835 containerd[1574]: 2025-11-08 00:25:01.177 [INFO][5813] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" iface="eth0" netns="" Nov 8 00:25:01.218835 containerd[1574]: 2025-11-08 00:25:01.177 [INFO][5813] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Nov 8 00:25:01.218835 containerd[1574]: 2025-11-08 00:25:01.177 [INFO][5813] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Nov 8 00:25:01.218835 containerd[1574]: 2025-11-08 00:25:01.201 [INFO][5821] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" HandleID="k8s-pod-network.a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Workload="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" Nov 8 00:25:01.218835 containerd[1574]: 2025-11-08 00:25:01.201 [INFO][5821] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:25:01.218835 containerd[1574]: 2025-11-08 00:25:01.201 [INFO][5821] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:25:01.218835 containerd[1574]: 2025-11-08 00:25:01.209 [WARNING][5821] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" HandleID="k8s-pod-network.a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Workload="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" Nov 8 00:25:01.218835 containerd[1574]: 2025-11-08 00:25:01.209 [INFO][5821] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" HandleID="k8s-pod-network.a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Workload="localhost-k8s-calico--apiserver--65fd777b6d--qk5xd-eth0" Nov 8 00:25:01.218835 containerd[1574]: 2025-11-08 00:25:01.211 [INFO][5821] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:25:01.218835 containerd[1574]: 2025-11-08 00:25:01.215 [INFO][5813] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b" Nov 8 00:25:01.218835 containerd[1574]: time="2025-11-08T00:25:01.218805277Z" level=info msg="TearDown network for sandbox \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\" successfully" Nov 8 00:25:01.227818 containerd[1574]: time="2025-11-08T00:25:01.227759195Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:25:01.227922 containerd[1574]: time="2025-11-08T00:25:01.227832713Z" level=info msg="RemovePodSandbox \"a7b5459ea63f1661a23be9e7577e7c35e6a898961e76296614897ddc403fd83b\" returns successfully" Nov 8 00:25:01.228517 containerd[1574]: time="2025-11-08T00:25:01.228477396Z" level=info msg="StopPodSandbox for \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\"" Nov 8 00:25:01.306205 containerd[1574]: 2025-11-08 00:25:01.269 [WARNING][5840] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"10e98c86-5971-470b-a7fe-1df841b99600", ResourceVersion:"1152", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136", Pod:"coredns-668d6bf9bc-rzrnf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e4d9005998", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:25:01.306205 containerd[1574]: 2025-11-08 00:25:01.269 [INFO][5840] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Nov 8 00:25:01.306205 containerd[1574]: 2025-11-08 00:25:01.269 [INFO][5840] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" iface="eth0" netns="" Nov 8 00:25:01.306205 containerd[1574]: 2025-11-08 00:25:01.269 [INFO][5840] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Nov 8 00:25:01.306205 containerd[1574]: 2025-11-08 00:25:01.269 [INFO][5840] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Nov 8 00:25:01.306205 containerd[1574]: 2025-11-08 00:25:01.291 [INFO][5849] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" HandleID="k8s-pod-network.f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Workload="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" Nov 8 00:25:01.306205 containerd[1574]: 2025-11-08 00:25:01.291 [INFO][5849] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:25:01.306205 containerd[1574]: 2025-11-08 00:25:01.291 [INFO][5849] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:25:01.306205 containerd[1574]: 2025-11-08 00:25:01.298 [WARNING][5849] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" HandleID="k8s-pod-network.f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Workload="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" Nov 8 00:25:01.306205 containerd[1574]: 2025-11-08 00:25:01.298 [INFO][5849] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" HandleID="k8s-pod-network.f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Workload="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" Nov 8 00:25:01.306205 containerd[1574]: 2025-11-08 00:25:01.299 [INFO][5849] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:25:01.306205 containerd[1574]: 2025-11-08 00:25:01.303 [INFO][5840] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Nov 8 00:25:01.306778 containerd[1574]: time="2025-11-08T00:25:01.306241873Z" level=info msg="TearDown network for sandbox \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\" successfully" Nov 8 00:25:01.306778 containerd[1574]: time="2025-11-08T00:25:01.306274294Z" level=info msg="StopPodSandbox for \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\" returns successfully" Nov 8 00:25:01.306970 containerd[1574]: time="2025-11-08T00:25:01.306941939Z" level=info msg="RemovePodSandbox for \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\"" Nov 8 00:25:01.307008 containerd[1574]: time="2025-11-08T00:25:01.306979529Z" level=info msg="Forcibly stopping sandbox \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\"" Nov 8 00:25:01.391794 containerd[1574]: 2025-11-08 00:25:01.350 [WARNING][5867] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"10e98c86-5971-470b-a7fe-1df841b99600", ResourceVersion:"1152", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4499c0a53fffe906c13b174e89cb746b2b3b17b53195dc068b813fcd9a3b6136", Pod:"coredns-668d6bf9bc-rzrnf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e4d9005998", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:25:01.391794 containerd[1574]: 2025-11-08 00:25:01.350 [INFO][5867] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Nov 8 00:25:01.391794 containerd[1574]: 2025-11-08 00:25:01.350 [INFO][5867] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" iface="eth0" netns="" Nov 8 00:25:01.391794 containerd[1574]: 2025-11-08 00:25:01.350 [INFO][5867] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Nov 8 00:25:01.391794 containerd[1574]: 2025-11-08 00:25:01.350 [INFO][5867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Nov 8 00:25:01.391794 containerd[1574]: 2025-11-08 00:25:01.375 [INFO][5876] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" HandleID="k8s-pod-network.f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Workload="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" Nov 8 00:25:01.391794 containerd[1574]: 2025-11-08 00:25:01.375 [INFO][5876] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:25:01.391794 containerd[1574]: 2025-11-08 00:25:01.375 [INFO][5876] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:25:01.391794 containerd[1574]: 2025-11-08 00:25:01.382 [WARNING][5876] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" HandleID="k8s-pod-network.f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Workload="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" Nov 8 00:25:01.391794 containerd[1574]: 2025-11-08 00:25:01.382 [INFO][5876] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" HandleID="k8s-pod-network.f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Workload="localhost-k8s-coredns--668d6bf9bc--rzrnf-eth0" Nov 8 00:25:01.391794 containerd[1574]: 2025-11-08 00:25:01.384 [INFO][5876] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:25:01.391794 containerd[1574]: 2025-11-08 00:25:01.388 [INFO][5867] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838" Nov 8 00:25:01.392392 containerd[1574]: time="2025-11-08T00:25:01.391826552Z" level=info msg="TearDown network for sandbox \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\" successfully" Nov 8 00:25:01.396793 containerd[1574]: time="2025-11-08T00:25:01.396742535Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:25:01.396878 containerd[1574]: time="2025-11-08T00:25:01.396806524Z" level=info msg="RemovePodSandbox \"f20264fa4a7330144317265960c69de01bee97bf01ee0770a677a3980dc9f838\" returns successfully" Nov 8 00:25:01.397559 containerd[1574]: time="2025-11-08T00:25:01.397509977Z" level=info msg="StopPodSandbox for \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\"" Nov 8 00:25:01.474618 containerd[1574]: 2025-11-08 00:25:01.436 [WARNING][5894] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0", GenerateName:"calico-apiserver-54f8bbf6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a83aa0bc-f007-4d1f-95cf-997e1c8ab851", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f8bbf6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c", Pod:"calico-apiserver-54f8bbf6f-szk2c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali340381e6781", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:25:01.474618 containerd[1574]: 2025-11-08 00:25:01.437 [INFO][5894] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Nov 8 00:25:01.474618 containerd[1574]: 2025-11-08 00:25:01.437 [INFO][5894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" iface="eth0" netns="" Nov 8 00:25:01.474618 containerd[1574]: 2025-11-08 00:25:01.437 [INFO][5894] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Nov 8 00:25:01.474618 containerd[1574]: 2025-11-08 00:25:01.437 [INFO][5894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Nov 8 00:25:01.474618 containerd[1574]: 2025-11-08 00:25:01.460 [INFO][5903] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" HandleID="k8s-pod-network.3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" Nov 8 00:25:01.474618 containerd[1574]: 2025-11-08 00:25:01.461 [INFO][5903] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:25:01.474618 containerd[1574]: 2025-11-08 00:25:01.461 [INFO][5903] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:25:01.474618 containerd[1574]: 2025-11-08 00:25:01.467 [WARNING][5903] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" HandleID="k8s-pod-network.3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" Nov 8 00:25:01.474618 containerd[1574]: 2025-11-08 00:25:01.467 [INFO][5903] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" HandleID="k8s-pod-network.3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" Nov 8 00:25:01.474618 containerd[1574]: 2025-11-08 00:25:01.468 [INFO][5903] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:25:01.474618 containerd[1574]: 2025-11-08 00:25:01.471 [INFO][5894] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Nov 8 00:25:01.474618 containerd[1574]: time="2025-11-08T00:25:01.474594856Z" level=info msg="TearDown network for sandbox \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\" successfully" Nov 8 00:25:01.475536 containerd[1574]: time="2025-11-08T00:25:01.474631194Z" level=info msg="StopPodSandbox for \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\" returns successfully" Nov 8 00:25:01.475536 containerd[1574]: time="2025-11-08T00:25:01.475300734Z" level=info msg="RemovePodSandbox for \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\"" Nov 8 00:25:01.475536 containerd[1574]: time="2025-11-08T00:25:01.475336590Z" level=info msg="Forcibly stopping sandbox \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\"" Nov 8 00:25:01.553214 containerd[1574]: 2025-11-08 00:25:01.515 [WARNING][5921] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0", GenerateName:"calico-apiserver-54f8bbf6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a83aa0bc-f007-4d1f-95cf-997e1c8ab851", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f8bbf6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab270de6c01f9cd4ede8279154f9126fa266d2ff568574d1917cecd4df5b926c", Pod:"calico-apiserver-54f8bbf6f-szk2c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali340381e6781", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:25:01.553214 containerd[1574]: 2025-11-08 00:25:01.516 [INFO][5921] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Nov 8 00:25:01.553214 containerd[1574]: 2025-11-08 00:25:01.516 [INFO][5921] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" iface="eth0" netns="" Nov 8 00:25:01.553214 containerd[1574]: 2025-11-08 00:25:01.516 [INFO][5921] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Nov 8 00:25:01.553214 containerd[1574]: 2025-11-08 00:25:01.516 [INFO][5921] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Nov 8 00:25:01.553214 containerd[1574]: 2025-11-08 00:25:01.537 [INFO][5930] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" HandleID="k8s-pod-network.3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" Nov 8 00:25:01.553214 containerd[1574]: 2025-11-08 00:25:01.537 [INFO][5930] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:25:01.553214 containerd[1574]: 2025-11-08 00:25:01.537 [INFO][5930] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:25:01.553214 containerd[1574]: 2025-11-08 00:25:01.544 [WARNING][5930] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" HandleID="k8s-pod-network.3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" Nov 8 00:25:01.553214 containerd[1574]: 2025-11-08 00:25:01.544 [INFO][5930] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" HandleID="k8s-pod-network.3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--szk2c-eth0" Nov 8 00:25:01.553214 containerd[1574]: 2025-11-08 00:25:01.546 [INFO][5930] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:25:01.553214 containerd[1574]: 2025-11-08 00:25:01.549 [INFO][5921] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d" Nov 8 00:25:01.553796 containerd[1574]: time="2025-11-08T00:25:01.553270423Z" level=info msg="TearDown network for sandbox \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\" successfully" Nov 8 00:25:01.563807 containerd[1574]: time="2025-11-08T00:25:01.563761939Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:25:01.563872 containerd[1574]: time="2025-11-08T00:25:01.563825619Z" level=info msg="RemovePodSandbox \"3febc2a92e65b0789362416458647b3e96d9b9c8d61e818e2090ea0a3fb84c8d\" returns successfully" Nov 8 00:25:01.564542 containerd[1574]: time="2025-11-08T00:25:01.564502030Z" level=info msg="StopPodSandbox for \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\"" Nov 8 00:25:01.643357 containerd[1574]: 2025-11-08 00:25:01.602 [WARNING][5949] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" WorkloadEndpoint="localhost-k8s-whisker--886f776dc--cd7z8-eth0" Nov 8 00:25:01.643357 containerd[1574]: 2025-11-08 00:25:01.602 [INFO][5949] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Nov 8 00:25:01.643357 containerd[1574]: 2025-11-08 00:25:01.602 [INFO][5949] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" iface="eth0" netns="" Nov 8 00:25:01.643357 containerd[1574]: 2025-11-08 00:25:01.602 [INFO][5949] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Nov 8 00:25:01.643357 containerd[1574]: 2025-11-08 00:25:01.602 [INFO][5949] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Nov 8 00:25:01.643357 containerd[1574]: 2025-11-08 00:25:01.626 [INFO][5958] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" HandleID="k8s-pod-network.a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Workload="localhost-k8s-whisker--886f776dc--cd7z8-eth0" Nov 8 00:25:01.643357 containerd[1574]: 2025-11-08 00:25:01.627 [INFO][5958] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:25:01.643357 containerd[1574]: 2025-11-08 00:25:01.627 [INFO][5958] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:25:01.643357 containerd[1574]: 2025-11-08 00:25:01.635 [WARNING][5958] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" HandleID="k8s-pod-network.a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Workload="localhost-k8s-whisker--886f776dc--cd7z8-eth0" Nov 8 00:25:01.643357 containerd[1574]: 2025-11-08 00:25:01.635 [INFO][5958] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" HandleID="k8s-pod-network.a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Workload="localhost-k8s-whisker--886f776dc--cd7z8-eth0" Nov 8 00:25:01.643357 containerd[1574]: 2025-11-08 00:25:01.636 [INFO][5958] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:25:01.643357 containerd[1574]: 2025-11-08 00:25:01.640 [INFO][5949] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Nov 8 00:25:01.643888 containerd[1574]: time="2025-11-08T00:25:01.643448446Z" level=info msg="TearDown network for sandbox \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\" successfully" Nov 8 00:25:01.643888 containerd[1574]: time="2025-11-08T00:25:01.643497310Z" level=info msg="StopPodSandbox for \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\" returns successfully" Nov 8 00:25:01.644245 containerd[1574]: time="2025-11-08T00:25:01.644210055Z" level=info msg="RemovePodSandbox for \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\"" Nov 8 00:25:01.644302 containerd[1574]: time="2025-11-08T00:25:01.644252376Z" level=info msg="Forcibly stopping sandbox \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\"" Nov 8 00:25:01.721859 containerd[1574]: time="2025-11-08T00:25:01.721814140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:25:01.731870 containerd[1574]: 2025-11-08 00:25:01.690 [WARNING][5975] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" WorkloadEndpoint="localhost-k8s-whisker--886f776dc--cd7z8-eth0" Nov 8 00:25:01.731870 containerd[1574]: 2025-11-08 00:25:01.690 [INFO][5975] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Nov 8 00:25:01.731870 containerd[1574]: 2025-11-08 00:25:01.690 [INFO][5975] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" iface="eth0" netns="" Nov 8 00:25:01.731870 containerd[1574]: 2025-11-08 00:25:01.690 [INFO][5975] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Nov 8 00:25:01.731870 containerd[1574]: 2025-11-08 00:25:01.690 [INFO][5975] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Nov 8 00:25:01.731870 containerd[1574]: 2025-11-08 00:25:01.712 [INFO][5984] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" HandleID="k8s-pod-network.a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Workload="localhost-k8s-whisker--886f776dc--cd7z8-eth0" Nov 8 00:25:01.731870 containerd[1574]: 2025-11-08 00:25:01.712 [INFO][5984] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:25:01.731870 containerd[1574]: 2025-11-08 00:25:01.712 [INFO][5984] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:25:01.731870 containerd[1574]: 2025-11-08 00:25:01.719 [WARNING][5984] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" HandleID="k8s-pod-network.a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Workload="localhost-k8s-whisker--886f776dc--cd7z8-eth0" Nov 8 00:25:01.731870 containerd[1574]: 2025-11-08 00:25:01.719 [INFO][5984] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" HandleID="k8s-pod-network.a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Workload="localhost-k8s-whisker--886f776dc--cd7z8-eth0" Nov 8 00:25:01.731870 containerd[1574]: 2025-11-08 00:25:01.722 [INFO][5984] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:25:01.731870 containerd[1574]: 2025-11-08 00:25:01.727 [INFO][5975] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e" Nov 8 00:25:01.732220 containerd[1574]: time="2025-11-08T00:25:01.731861692Z" level=info msg="TearDown network for sandbox \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\" successfully" Nov 8 00:25:01.738101 containerd[1574]: time="2025-11-08T00:25:01.738038110Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:25:01.738101 containerd[1574]: time="2025-11-08T00:25:01.738113153Z" level=info msg="RemovePodSandbox \"a1d6ea2ce64bbe497a48c7a6127f57f1eede0b7f2a53304de82ab325f875326e\" returns successfully" Nov 8 00:25:01.738704 containerd[1574]: time="2025-11-08T00:25:01.738650445Z" level=info msg="StopPodSandbox for \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\"" Nov 8 00:25:01.815094 containerd[1574]: 2025-11-08 00:25:01.778 [WARNING][6002] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0", GenerateName:"calico-apiserver-54f8bbf6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9d10b4e8-1a4b-40ad-b663-a53c60424a45", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f8bbf6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a", Pod:"calico-apiserver-54f8bbf6f-npzst", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3fc83c98ea4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:25:01.815094 containerd[1574]: 2025-11-08 00:25:01.778 [INFO][6002] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Nov 8 00:25:01.815094 containerd[1574]: 2025-11-08 00:25:01.778 [INFO][6002] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" iface="eth0" netns="" Nov 8 00:25:01.815094 containerd[1574]: 2025-11-08 00:25:01.778 [INFO][6002] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Nov 8 00:25:01.815094 containerd[1574]: 2025-11-08 00:25:01.778 [INFO][6002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Nov 8 00:25:01.815094 containerd[1574]: 2025-11-08 00:25:01.800 [INFO][6010] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" HandleID="k8s-pod-network.39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" Nov 8 00:25:01.815094 containerd[1574]: 2025-11-08 00:25:01.800 [INFO][6010] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:25:01.815094 containerd[1574]: 2025-11-08 00:25:01.800 [INFO][6010] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:25:01.815094 containerd[1574]: 2025-11-08 00:25:01.807 [WARNING][6010] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" HandleID="k8s-pod-network.39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" Nov 8 00:25:01.815094 containerd[1574]: 2025-11-08 00:25:01.807 [INFO][6010] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" HandleID="k8s-pod-network.39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" Nov 8 00:25:01.815094 containerd[1574]: 2025-11-08 00:25:01.808 [INFO][6010] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:25:01.815094 containerd[1574]: 2025-11-08 00:25:01.812 [INFO][6002] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Nov 8 00:25:01.815594 containerd[1574]: time="2025-11-08T00:25:01.815145881Z" level=info msg="TearDown network for sandbox \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\" successfully" Nov 8 00:25:01.815594 containerd[1574]: time="2025-11-08T00:25:01.815177942Z" level=info msg="StopPodSandbox for \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\" returns successfully" Nov 8 00:25:01.815914 containerd[1574]: time="2025-11-08T00:25:01.815865148Z" level=info msg="RemovePodSandbox for \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\"" Nov 8 00:25:01.815914 containerd[1574]: time="2025-11-08T00:25:01.815914383Z" level=info msg="Forcibly stopping sandbox \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\"" Nov 8 00:25:01.894861 containerd[1574]: 2025-11-08 00:25:01.854 [WARNING][6029] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0", GenerateName:"calico-apiserver-54f8bbf6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9d10b4e8-1a4b-40ad-b663-a53c60424a45", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f8bbf6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49f79cf1311c1861f0b482eb91ce751d530adb3632f1d795ec1632f038ee891a", Pod:"calico-apiserver-54f8bbf6f-npzst", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3fc83c98ea4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:25:01.894861 containerd[1574]: 2025-11-08 00:25:01.854 [INFO][6029] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Nov 8 00:25:01.894861 containerd[1574]: 2025-11-08 00:25:01.854 [INFO][6029] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" iface="eth0" netns="" Nov 8 00:25:01.894861 containerd[1574]: 2025-11-08 00:25:01.854 [INFO][6029] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Nov 8 00:25:01.894861 containerd[1574]: 2025-11-08 00:25:01.854 [INFO][6029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Nov 8 00:25:01.894861 containerd[1574]: 2025-11-08 00:25:01.880 [INFO][6038] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" HandleID="k8s-pod-network.39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" Nov 8 00:25:01.894861 containerd[1574]: 2025-11-08 00:25:01.881 [INFO][6038] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:25:01.894861 containerd[1574]: 2025-11-08 00:25:01.881 [INFO][6038] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:25:01.894861 containerd[1574]: 2025-11-08 00:25:01.887 [WARNING][6038] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" HandleID="k8s-pod-network.39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" Nov 8 00:25:01.894861 containerd[1574]: 2025-11-08 00:25:01.887 [INFO][6038] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" HandleID="k8s-pod-network.39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Workload="localhost-k8s-calico--apiserver--54f8bbf6f--npzst-eth0" Nov 8 00:25:01.894861 containerd[1574]: 2025-11-08 00:25:01.888 [INFO][6038] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:25:01.894861 containerd[1574]: 2025-11-08 00:25:01.892 [INFO][6029] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea" Nov 8 00:25:01.895640 containerd[1574]: time="2025-11-08T00:25:01.894916496Z" level=info msg="TearDown network for sandbox \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\" successfully" Nov 8 00:25:01.899189 containerd[1574]: time="2025-11-08T00:25:01.899161921Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:25:01.899247 containerd[1574]: time="2025-11-08T00:25:01.899209282Z" level=info msg="RemovePodSandbox \"39c1f82a266c4eccc4d4437f3d7d6145310b7f9582c3cadab08dd8c9daaef2ea\" returns successfully" Nov 8 00:25:01.899888 containerd[1574]: time="2025-11-08T00:25:01.899832216Z" level=info msg="StopPodSandbox for \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\"" Nov 8 00:25:01.980045 containerd[1574]: 2025-11-08 00:25:01.939 [WARNING][6056] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5mbvz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9805d816-e7c8-479d-9360-d3b3efa64586", ResourceVersion:"1243", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b", Pod:"csi-node-driver-5mbvz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie3b8347fa91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:25:01.980045 containerd[1574]: 2025-11-08 00:25:01.940 [INFO][6056] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Nov 8 00:25:01.980045 containerd[1574]: 2025-11-08 00:25:01.940 [INFO][6056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" iface="eth0" netns="" Nov 8 00:25:01.980045 containerd[1574]: 2025-11-08 00:25:01.940 [INFO][6056] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Nov 8 00:25:01.980045 containerd[1574]: 2025-11-08 00:25:01.940 [INFO][6056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Nov 8 00:25:01.980045 containerd[1574]: 2025-11-08 00:25:01.964 [INFO][6065] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" HandleID="k8s-pod-network.55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Workload="localhost-k8s-csi--node--driver--5mbvz-eth0" Nov 8 00:25:01.980045 containerd[1574]: 2025-11-08 00:25:01.964 [INFO][6065] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:25:01.980045 containerd[1574]: 2025-11-08 00:25:01.964 [INFO][6065] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:25:01.980045 containerd[1574]: 2025-11-08 00:25:01.971 [WARNING][6065] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" HandleID="k8s-pod-network.55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Workload="localhost-k8s-csi--node--driver--5mbvz-eth0" Nov 8 00:25:01.980045 containerd[1574]: 2025-11-08 00:25:01.971 [INFO][6065] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" HandleID="k8s-pod-network.55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Workload="localhost-k8s-csi--node--driver--5mbvz-eth0" Nov 8 00:25:01.980045 containerd[1574]: 2025-11-08 00:25:01.973 [INFO][6065] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:25:01.980045 containerd[1574]: 2025-11-08 00:25:01.976 [INFO][6056] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Nov 8 00:25:01.980045 containerd[1574]: time="2025-11-08T00:25:01.980054582Z" level=info msg="TearDown network for sandbox \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\" successfully" Nov 8 00:25:01.980730 containerd[1574]: time="2025-11-08T00:25:01.980079909Z" level=info msg="StopPodSandbox for \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\" returns successfully" Nov 8 00:25:01.980730 containerd[1574]: time="2025-11-08T00:25:01.980329895Z" level=info msg="RemovePodSandbox for \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\"" Nov 8 00:25:01.980730 containerd[1574]: time="2025-11-08T00:25:01.980353891Z" level=info msg="Forcibly stopping sandbox \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\"" Nov 8 00:25:02.058508 containerd[1574]: 2025-11-08 00:25:02.017 [WARNING][6084] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5mbvz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9805d816-e7c8-479d-9360-d3b3efa64586", ResourceVersion:"1243", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da4a40a02e82f56dc18b849495e7cd244078fb0b56b3e3374e2f3f8d5fd35e7b", Pod:"csi-node-driver-5mbvz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie3b8347fa91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:25:02.058508 containerd[1574]: 2025-11-08 00:25:02.018 [INFO][6084] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Nov 8 00:25:02.058508 containerd[1574]: 2025-11-08 00:25:02.018 [INFO][6084] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" iface="eth0" netns="" Nov 8 00:25:02.058508 containerd[1574]: 2025-11-08 00:25:02.018 [INFO][6084] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Nov 8 00:25:02.058508 containerd[1574]: 2025-11-08 00:25:02.018 [INFO][6084] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Nov 8 00:25:02.058508 containerd[1574]: 2025-11-08 00:25:02.045 [INFO][6094] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" HandleID="k8s-pod-network.55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Workload="localhost-k8s-csi--node--driver--5mbvz-eth0" Nov 8 00:25:02.058508 containerd[1574]: 2025-11-08 00:25:02.045 [INFO][6094] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:25:02.058508 containerd[1574]: 2025-11-08 00:25:02.045 [INFO][6094] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:25:02.058508 containerd[1574]: 2025-11-08 00:25:02.050 [WARNING][6094] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" HandleID="k8s-pod-network.55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Workload="localhost-k8s-csi--node--driver--5mbvz-eth0" Nov 8 00:25:02.058508 containerd[1574]: 2025-11-08 00:25:02.050 [INFO][6094] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" HandleID="k8s-pod-network.55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Workload="localhost-k8s-csi--node--driver--5mbvz-eth0" Nov 8 00:25:02.058508 containerd[1574]: 2025-11-08 00:25:02.052 [INFO][6094] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:25:02.058508 containerd[1574]: 2025-11-08 00:25:02.055 [INFO][6084] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85" Nov 8 00:25:02.058508 containerd[1574]: time="2025-11-08T00:25:02.058479103Z" level=info msg="TearDown network for sandbox \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\" successfully" Nov 8 00:25:02.061600 containerd[1574]: time="2025-11-08T00:25:02.061567660Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:25:02.062934 containerd[1574]: time="2025-11-08T00:25:02.062857279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:25:02.062934 containerd[1574]: time="2025-11-08T00:25:02.062886743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:25:02.063204 kubelet[2659]: E1108 00:25:02.063135 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:25:02.063668 kubelet[2659]: E1108 00:25:02.063215 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:25:02.063668 kubelet[2659]: E1108 00:25:02.063379 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jmh8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54f8bbf6f-szk2c_calico-apiserver(a83aa0bc-f007-4d1f-95cf-997e1c8ab851): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:25:02.063834 containerd[1574]: time="2025-11-08T00:25:02.063801687Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
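The kubelet records above show the full failure chain for the apiserver image: containerd's resolver gets a 404 from ghcr.io (the "trying next host - response was http.StatusNotFound" record), PullImage returns NotFound over CRI, and kubelet surfaces ErrImagePull on the pod. Later records show the same pods in ImagePullBackOff, kubelet's capped exponential retry. A sketch of that delay schedule, assuming the commonly documented defaults of a 10s base doubling to a 5m cap; treat the constants as assumptions, not a reading of kubelet source.

```go
// Illustrative sketch of the capped, doubling retry delay behind the
// ImagePullBackOff messages later in this log. The 10s base and 5m cap
// are assumed defaults, not values read out of kubelet's code.
package main

import (
	"fmt"
	"time"
)

func pullBackoff(attempt int) time.Duration {
	const (
		baseDelay = 10 * time.Second
		maxDelay  = 5 * time.Minute
	)
	d := baseDelay
	for i := 0; i < attempt; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for attempt := 0; attempt < 7; attempt++ {
		fmt.Printf("attempt %d: wait %v\n", attempt, pullBackoff(attempt))
	}
	// attempt 0 waits 10s, then 20s, 40s, 1m20s, 2m40s; attempt 5 and
	// beyond cap at 5m0s, matching the steady cadence of the repeated
	// "Back-off pulling image" errors in this log.
}
```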
Nov 8 00:25:02.063923 containerd[1574]: time="2025-11-08T00:25:02.063905726Z" level=info msg="RemovePodSandbox \"55632d010fc2b32c1d24c15721c36739597dc577946e878c991cd7eea0d77f85\" returns successfully" Nov 8 00:25:02.064642 containerd[1574]: time="2025-11-08T00:25:02.064322142Z" level=info msg="StopPodSandbox for \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\"" Nov 8 00:25:02.065544 kubelet[2659]: E1108 00:25:02.065497 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f8bbf6f-szk2c" podUID="a83aa0bc-f007-4d1f-95cf-997e1c8ab851" Nov 8 00:25:02.139488 containerd[1574]: 2025-11-08 00:25:02.103 [WARNING][6112] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kckcx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003", Pod:"coredns-668d6bf9bc-kckcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29e1f61b725", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:25:02.139488 containerd[1574]: 2025-11-08 00:25:02.103 [INFO][6112] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Nov 8 00:25:02.139488 containerd[1574]: 2025-11-08 00:25:02.103 [INFO][6112] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" iface="eth0" netns="" Nov 8 00:25:02.139488 containerd[1574]: 2025-11-08 00:25:02.103 [INFO][6112] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Nov 8 00:25:02.139488 containerd[1574]: 2025-11-08 00:25:02.103 [INFO][6112] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Nov 8 00:25:02.139488 containerd[1574]: 2025-11-08 00:25:02.124 [INFO][6120] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" HandleID="k8s-pod-network.a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Workload="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0" Nov 8 00:25:02.139488 containerd[1574]: 2025-11-08 00:25:02.124 [INFO][6120] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:25:02.139488 containerd[1574]: 2025-11-08 00:25:02.124 [INFO][6120] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:25:02.139488 containerd[1574]: 2025-11-08 00:25:02.130 [WARNING][6120] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" HandleID="k8s-pod-network.a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Workload="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0" Nov 8 00:25:02.139488 containerd[1574]: 2025-11-08 00:25:02.130 [INFO][6120] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" HandleID="k8s-pod-network.a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Workload="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0" Nov 8 00:25:02.139488 containerd[1574]: 2025-11-08 00:25:02.133 [INFO][6120] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:25:02.139488 containerd[1574]: 2025-11-08 00:25:02.135 [INFO][6112] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Nov 8 00:25:02.140351 containerd[1574]: time="2025-11-08T00:25:02.139497027Z" level=info msg="TearDown network for sandbox \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\" successfully" Nov 8 00:25:02.140351 containerd[1574]: time="2025-11-08T00:25:02.139530921Z" level=info msg="StopPodSandbox for \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\" returns successfully" Nov 8 00:25:02.140351 containerd[1574]: time="2025-11-08T00:25:02.140065748Z" level=info msg="RemovePodSandbox for \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\"" Nov 8 00:25:02.140351 containerd[1574]: time="2025-11-08T00:25:02.140095183Z" level=info msg="Forcibly stopping sandbox \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\"" Nov 8 00:25:02.228675 containerd[1574]: 2025-11-08 00:25:02.180 [WARNING][6138] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kckcx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"21bcda2d-b0ec-4df6-8fa2-ae8cac068bf9", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 24, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a39c7f8ac0c8b1cb2ede4991b14df2828259798511a275b1e041cfd0f3d71003", Pod:"coredns-668d6bf9bc-kckcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29e1f61b725", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:25:02.228675 containerd[1574]: 2025-11-08 00:25:02.180 [INFO][6138] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Nov 8 00:25:02.228675 containerd[1574]: 2025-11-08 00:25:02.180 [INFO][6138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" iface="eth0" netns="" Nov 8 00:25:02.228675 containerd[1574]: 2025-11-08 00:25:02.180 [INFO][6138] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Nov 8 00:25:02.228675 containerd[1574]: 2025-11-08 00:25:02.180 [INFO][6138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Nov 8 00:25:02.228675 containerd[1574]: 2025-11-08 00:25:02.210 [INFO][6147] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" HandleID="k8s-pod-network.a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Workload="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0" Nov 8 00:25:02.228675 containerd[1574]: 2025-11-08 00:25:02.211 [INFO][6147] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:25:02.228675 containerd[1574]: 2025-11-08 00:25:02.211 [INFO][6147] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:25:02.228675 containerd[1574]: 2025-11-08 00:25:02.219 [WARNING][6147] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" HandleID="k8s-pod-network.a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Workload="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0"
Nov 8 00:25:02.228675 containerd[1574]: 2025-11-08 00:25:02.219 [INFO][6147] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" HandleID="k8s-pod-network.a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078" Workload="localhost-k8s-coredns--668d6bf9bc--kckcx-eth0"
Nov 8 00:25:02.228675 containerd[1574]: 2025-11-08 00:25:02.221 [INFO][6147] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:25:02.228675 containerd[1574]: 2025-11-08 00:25:02.225 [INFO][6138] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078"
Nov 8 00:25:02.228675 containerd[1574]: time="2025-11-08T00:25:02.228648270Z" level=info msg="TearDown network for sandbox \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\" successfully"
Nov 8 00:25:02.233032 containerd[1574]: time="2025-11-08T00:25:02.233000458Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 8 00:25:02.233100 containerd[1574]: time="2025-11-08T00:25:02.233055159Z" level=info msg="RemovePodSandbox \"a43b458140125a1b7d66c6166cf36ce668a0d1f1fc8b25af72f7b1ebe109e078\" returns successfully"
Nov 8 00:25:02.723715 kubelet[2659]: E1108 00:25:02.723401 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85bb64496c-96z92" podUID="497820bc-a22f-4ed0-899b-b37a4c4036b5"
Nov 8 00:25:04.722740 kubelet[2659]: E1108 00:25:04.722530 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-999d4cc44-xrwzd" podUID="060a07da-b44f-4d4c-ae28-2a94dae48d16"
Nov 8 00:25:05.320872 systemd[1]: Started sshd@14-10.0.0.93:22-10.0.0.1:54910.service - OpenSSH per-connection server daemon (10.0.0.1:54910).
Nov 8 00:25:05.362940 sshd[6159]: Accepted publickey for core from 10.0.0.1 port 54910 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:25:05.365550 sshd[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:25:05.371156 systemd-logind[1557]: New session 15 of user core.
Nov 8 00:25:05.381990 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 8 00:25:05.520519 sshd[6159]: pam_unix(sshd:session): session closed for user core
Nov 8 00:25:05.525454 systemd[1]: sshd@14-10.0.0.93:22-10.0.0.1:54910.service: Deactivated successfully.
Nov 8 00:25:05.528980 systemd[1]: session-15.scope: Deactivated successfully.
Nov 8 00:25:05.529808 systemd-logind[1557]: Session 15 logged out. Waiting for processes to exit.
Nov 8 00:25:05.530752 systemd-logind[1557]: Removed session 15.
Nov 8 00:25:08.721996 kubelet[2659]: E1108 00:25:08.721766 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65fd777b6d-qk5xd" podUID="258307e3-fc8b-44da-8b83-06fe3d2024fa"
Nov 8 00:25:10.532767 systemd[1]: Started sshd@15-10.0.0.93:22-10.0.0.1:40074.service - OpenSSH per-connection server daemon (10.0.0.1:40074).
Nov 8 00:25:10.563065 sshd[6207]: Accepted publickey for core from 10.0.0.1 port 40074 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:25:10.565107 sshd[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:25:10.570076 systemd-logind[1557]: New session 16 of user core.
Nov 8 00:25:10.583969 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 8 00:25:10.705586 sshd[6207]: pam_unix(sshd:session): session closed for user core
Nov 8 00:25:10.710033 systemd[1]: sshd@15-10.0.0.93:22-10.0.0.1:40074.service: Deactivated successfully.
Nov 8 00:25:10.712911 systemd[1]: session-16.scope: Deactivated successfully.
Nov 8 00:25:10.713058 systemd-logind[1557]: Session 16 logged out. Waiting for processes to exit.
Nov 8 00:25:10.714319 systemd-logind[1557]: Removed session 16.
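The ErrImagePull / ImagePullBackOff alternation in the kubelet entries here is the image-pull back-off at work: each failed pull roughly doubles the retry delay, so the identical "not found" error resurfaces at growing intervals. A sketch of the schedule, assuming the upstream kubelet defaults of a 10s initial delay and a 5m cap (illustrative only, not kubelet's code):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed defaults: 10s initial delay, doubling per failed pull,
	// capped at 5 minutes.
	delay, limit := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("pull attempt %d failed; next retry in %v\n", attempt, delay)
		delay *= 2
		if delay > limit {
			delay = limit
		}
	}
}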
Nov 8 00:25:10.722018 kubelet[2659]: E1108 00:25:10.721915 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t5h" podUID="e4d2644c-0b49-478b-8030-eea32781a579"
Nov 8 00:25:12.721599 kubelet[2659]: E1108 00:25:12.721452 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:25:12.723923 kubelet[2659]: E1108 00:25:12.723863 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5mbvz" podUID="9805d816-e7c8-479d-9360-d3b3efa64586"
Nov 8 00:25:13.722192 kubelet[2659]: E1108 00:25:13.722108 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f8bbf6f-npzst" podUID="9d10b4e8-1a4b-40ad-b663-a53c60424a45"
Nov 8 00:25:15.721689 kubelet[2659]: E1108 00:25:15.721639 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f8bbf6f-szk2c" podUID="a83aa0bc-f007-4d1f-95cf-997e1c8ab851"
Nov 8 00:25:15.722968 containerd[1574]: time="2025-11-08T00:25:15.722259792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 8 00:25:15.722801 systemd[1]: Started sshd@16-10.0.0.93:22-10.0.0.1:40076.service - OpenSSH per-connection server daemon (10.0.0.1:40076).
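The "Nameserver limits exceeded" lines are independent of the registry failures: the node's resolv.conf lists more nameservers than the glibc limit of three, so kubelet keeps only the first three (1.1.1.1 1.0.0.1 8.8.8.8, per the message) and warns about the rest. A small Go sketch of that truncation (the fourth server is an invented example):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The first three entries match the "applied nameserver line" in the
	// log; 9.9.9.9 stands in for whatever extra server triggered the warning.
	resolvConf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	const maxNS = 3 // glibc MAXNS
	if len(servers) > maxNS {
		fmt.Printf("limits exceeded, omitting %v\n", servers[maxNS:])
		servers = servers[:maxNS]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}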
Nov 8 00:25:15.759084 sshd[6222]: Accepted publickey for core from 10.0.0.1 port 40076 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:25:15.761199 sshd[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:15.766332 systemd-logind[1557]: New session 17 of user core. Nov 8 00:25:15.775859 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:25:15.910987 sshd[6222]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:15.915416 systemd[1]: sshd@16-10.0.0.93:22-10.0.0.1:40076.service: Deactivated successfully. Nov 8 00:25:15.919713 systemd-logind[1557]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:25:15.921062 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:25:15.922208 systemd-logind[1557]: Removed session 17. Nov 8 00:25:16.089171 containerd[1574]: time="2025-11-08T00:25:16.089113126Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:25:16.090682 containerd[1574]: time="2025-11-08T00:25:16.090597839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:25:16.090750 containerd[1574]: time="2025-11-08T00:25:16.090680543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:25:16.090975 kubelet[2659]: E1108 00:25:16.090925 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:25:16.091059 kubelet[2659]: E1108 00:25:16.090991 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:25:16.091233 kubelet[2659]: E1108 00:25:16.091157 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndblm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-999d4cc44-xrwzd_calico-system(060a07da-b44f-4d4c-ae28-2a94dae48d16): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:25:16.092440 kubelet[2659]: E1108 00:25:16.092382 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-999d4cc44-xrwzd" podUID="060a07da-b44f-4d4c-ae28-2a94dae48d16" Nov 8 00:25:17.721393 kubelet[2659]: E1108 00:25:17.721341 2659 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:25:17.722691 containerd[1574]: time="2025-11-08T00:25:17.722636666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:25:18.208772 containerd[1574]: time="2025-11-08T00:25:18.208705502Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:25:18.321577 containerd[1574]: time="2025-11-08T00:25:18.321480125Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:25:18.321719 containerd[1574]: time="2025-11-08T00:25:18.321492420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:25:18.321870 kubelet[2659]: E1108 00:25:18.321817 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:25:18.321948 kubelet[2659]: E1108 00:25:18.321883 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:25:18.322044 kubelet[2659]: E1108 00:25:18.322017 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:168e7d9989574572b8f93af229b1409b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85wsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85bb64496c-96z92_calico-system(497820bc-a22f-4ed0-899b-b37a4c4036b5): ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:25:18.323975 containerd[1574]: time="2025-11-08T00:25:18.323929814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:25:18.781647 containerd[1574]: time="2025-11-08T00:25:18.781575017Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:25:18.857015 containerd[1574]: time="2025-11-08T00:25:18.856948195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:25:18.857015 containerd[1574]: time="2025-11-08T00:25:18.856993400Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:25:18.857253 kubelet[2659]: E1108 00:25:18.857215 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:25:18.857694 kubelet[2659]: E1108 00:25:18.857264 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:25:18.857694 kubelet[2659]: E1108 00:25:18.857379 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85wsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85bb64496c-96z92_calico-system(497820bc-a22f-4ed0-899b-b37a4c4036b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:25:18.858686 kubelet[2659]: E1108 00:25:18.858615 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85bb64496c-96z92" podUID="497820bc-a22f-4ed0-899b-b37a4c4036b5" Nov 8 00:25:20.921842 systemd[1]: Started sshd@17-10.0.0.93:22-10.0.0.1:35120.service - OpenSSH per-connection server daemon (10.0.0.1:35120). Nov 8 00:25:20.960557 sshd[6237]: Accepted publickey for core from 10.0.0.1 port 35120 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:25:20.963698 sshd[6237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:20.973754 systemd-logind[1557]: New session 18 of user core. 
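Every pull failure in this stretch has one root cause: no v3.30.4 tag exists under the ghcr.io/flatcar/calico repositories, so containerd's resolver receives HTTP 404 ("trying next host - response was http.StatusNotFound") and surfaces NotFound. A tag can be checked out of band with a manifest HEAD request against the OCI distribution API; the sketch below assumes ghcr.io's anonymous token endpoint for public images and is not how containerd itself does it:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// tagExists probes the registry for a manifest. Assumption: public ghcr.io
// repositories hand out anonymous pull tokens from /token.
func tagExists(repo, tag string) (bool, error) {
	resp, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=repository:" + repo + ":pull")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var tok struct{ Token string }
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return false, err
	}

	req, err := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Add("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	res.Body.Close()
	return res.StatusCode == http.StatusOK, nil // 404 is the "not found" seen above
}

func main() {
	ok, err := tagExists("flatcar/calico/whisker", "v3.30.4")
	fmt.Println(ok, err) // expected: false <nil>, matching the log
}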
Nov 8 00:25:20.981610 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 8 00:25:21.159815 sshd[6237]: pam_unix(sshd:session): session closed for user core
Nov 8 00:25:21.168053 systemd[1]: Started sshd@18-10.0.0.93:22-10.0.0.1:35126.service - OpenSSH per-connection server daemon (10.0.0.1:35126).
Nov 8 00:25:21.168905 systemd[1]: sshd@17-10.0.0.93:22-10.0.0.1:35120.service: Deactivated successfully.
Nov 8 00:25:21.175857 systemd-logind[1557]: Session 18 logged out. Waiting for processes to exit.
Nov 8 00:25:21.176077 systemd[1]: session-18.scope: Deactivated successfully.
Nov 8 00:25:21.177948 systemd-logind[1557]: Removed session 18.
Nov 8 00:25:21.207961 sshd[6249]: Accepted publickey for core from 10.0.0.1 port 35126 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:25:21.210171 sshd[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:25:21.216660 systemd-logind[1557]: New session 19 of user core.
Nov 8 00:25:21.225061 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 8 00:25:21.695304 sshd[6249]: pam_unix(sshd:session): session closed for user core
Nov 8 00:25:21.702912 systemd[1]: Started sshd@19-10.0.0.93:22-10.0.0.1:35134.service - OpenSSH per-connection server daemon (10.0.0.1:35134).
Nov 8 00:25:21.703699 systemd[1]: sshd@18-10.0.0.93:22-10.0.0.1:35126.service: Deactivated successfully.
Nov 8 00:25:21.711665 systemd[1]: session-19.scope: Deactivated successfully.
Nov 8 00:25:21.714656 systemd-logind[1557]: Session 19 logged out. Waiting for processes to exit.
Nov 8 00:25:21.716218 systemd-logind[1557]: Removed session 19.
Nov 8 00:25:21.726957 containerd[1574]: time="2025-11-08T00:25:21.724708302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 8 00:25:21.753960 sshd[6263]: Accepted publickey for core from 10.0.0.1 port 35134 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:25:21.754665 sshd[6263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:25:21.761213 systemd-logind[1557]: New session 20 of user core.
Nov 8 00:25:21.767919 systemd[1]: Started session-20.scope - Session 20 of User core.
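The sshd unit names in these entries follow systemd's convention for per-connection (Accept=yes) socket activation: the instance name appears to encode a connection counter plus the local and remote endpoints, and each login additionally gets its own session-N.scope from systemd-logind. A trivial Go sketch that unpacks one of the names from the log (the naming convention is an assumption about systemd, not something any tool in this log parses):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// "sshd@<seq>-<local ip:port>-<remote ip:port>.service"
	unit := "sshd@18-10.0.0.93:22-10.0.0.1:35126.service"
	inst := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
	parts := strings.SplitN(inst, "-", 3)
	fmt.Printf("seq=%s local=%s remote=%s\n", parts[0], parts[1], parts[2])
}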
Nov 8 00:25:22.074950 containerd[1574]: time="2025-11-08T00:25:22.074468961Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:25:22.078437 containerd[1574]: time="2025-11-08T00:25:22.078100459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:25:22.078437 containerd[1574]: time="2025-11-08T00:25:22.078204704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:25:22.078573 kubelet[2659]: E1108 00:25:22.078481 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:25:22.078573 kubelet[2659]: E1108 00:25:22.078562 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:25:22.079240 kubelet[2659]: E1108 00:25:22.079037 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvmp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-z8t5h_calico-system(e4d2644c-0b49-478b-8030-eea32781a579): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:25:22.080166 kubelet[2659]: E1108 00:25:22.080140 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t5h" podUID="e4d2644c-0b49-478b-8030-eea32781a579" Nov 8 00:25:22.584072 sshd[6263]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:22.589884 systemd[1]: Started sshd@20-10.0.0.93:22-10.0.0.1:35148.service - OpenSSH per-connection server daemon (10.0.0.1:35148). Nov 8 00:25:22.591859 systemd[1]: sshd@19-10.0.0.93:22-10.0.0.1:35134.service: Deactivated successfully. Nov 8 00:25:22.604414 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:25:22.610551 systemd-logind[1557]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:25:22.612261 systemd-logind[1557]: Removed session 20. Nov 8 00:25:22.638771 sshd[6287]: Accepted publickey for core from 10.0.0.1 port 35148 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:25:22.640639 sshd[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:22.645945 systemd-logind[1557]: New session 21 of user core. Nov 8 00:25:22.654748 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:25:22.923045 sshd[6287]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:22.933328 systemd[1]: Started sshd@21-10.0.0.93:22-10.0.0.1:35156.service - OpenSSH per-connection server daemon (10.0.0.1:35156). Nov 8 00:25:22.934321 systemd[1]: sshd@20-10.0.0.93:22-10.0.0.1:35148.service: Deactivated successfully. Nov 8 00:25:22.939081 systemd-logind[1557]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:25:22.939600 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:25:22.941188 systemd-logind[1557]: Removed session 21. 
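The &Container{...} walls in the kuberuntime_manager entries are not corruption: when a container fails to start, kubelet logs the full container spec, rendered in Go struct-literal style (the Kubernetes API types carry generated String() methods that print pointers as &Type{...}). A toy illustration of that rendering, with simplified types rather than the real API structs:

package main

import "fmt"

type Probe struct {
	Command       []string
	PeriodSeconds int32
}

type Container struct {
	Name          string
	Image         string
	LivenessProbe Probe
}

func main() {
	c := &Container{
		Name:          "goldmane",
		Image:         "ghcr.io/flatcar/calico/goldmane:v3.30.4",
		LivenessProbe: Probe{Command: []string{"/health", "-live"}, PeriodSeconds: 60},
	}
	// %+v prints field names and values in struct-literal style, the same
	// general shape as the "Unhandled Error" lines above.
	fmt.Printf("%+v\n", c)
	// Output: &{Name:goldmane Image:ghcr.io/flatcar/calico/goldmane:v3.30.4 LivenessProbe:{Command:[/health -live] PeriodSeconds:60}}
}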
Nov 8 00:25:22.964783 sshd[6301]: Accepted publickey for core from 10.0.0.1 port 35156 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:25:22.966631 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:22.971593 systemd-logind[1557]: New session 22 of user core. Nov 8 00:25:22.981766 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:25:23.111078 sshd[6301]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:23.116242 systemd[1]: sshd@21-10.0.0.93:22-10.0.0.1:35156.service: Deactivated successfully. Nov 8 00:25:23.119791 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:25:23.120673 systemd-logind[1557]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:25:23.121712 systemd-logind[1557]: Removed session 22. Nov 8 00:25:23.722389 containerd[1574]: time="2025-11-08T00:25:23.722251727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:25:24.125848 containerd[1574]: time="2025-11-08T00:25:24.125777614Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:25:24.152095 containerd[1574]: time="2025-11-08T00:25:24.151997556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:25:24.152166 containerd[1574]: time="2025-11-08T00:25:24.152081097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:25:24.152528 kubelet[2659]: E1108 00:25:24.152461 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:25:24.152994 kubelet[2659]: E1108 00:25:24.152538 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:25:24.152994 kubelet[2659]: E1108 00:25:24.152918 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7df28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65fd777b6d-qk5xd_calico-apiserver(258307e3-fc8b-44da-8b83-06fe3d2024fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:25:24.153353 containerd[1574]: time="2025-11-08T00:25:24.153283298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:25:24.154835 kubelet[2659]: E1108 00:25:24.154780 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65fd777b6d-qk5xd" podUID="258307e3-fc8b-44da-8b83-06fe3d2024fa" Nov 8 00:25:24.510098 containerd[1574]: time="2025-11-08T00:25:24.509911806Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:25:24.511375 containerd[1574]: time="2025-11-08T00:25:24.511244575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:25:24.511375 containerd[1574]: time="2025-11-08T00:25:24.511329320Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:25:24.511663 kubelet[2659]: E1108 00:25:24.511604 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:25:24.511713 kubelet[2659]: E1108 00:25:24.511674 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:25:24.511875 kubelet[2659]: E1108 00:25:24.511818 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjcwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5mbvz_calico-system(9805d816-e7c8-479d-9360-d3b3efa64586): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:25:24.514459 containerd[1574]: time="2025-11-08T00:25:24.514403769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:25:25.281803 containerd[1574]: time="2025-11-08T00:25:25.281700094Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:25:25.283236 containerd[1574]: time="2025-11-08T00:25:25.283161211Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:25:25.283236 containerd[1574]: time="2025-11-08T00:25:25.283192956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:25:25.283544 kubelet[2659]: E1108 00:25:25.283474 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:25:25.284012 kubelet[2659]: E1108 00:25:25.283553 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:25:25.284012 kubelet[2659]: E1108 00:25:25.283705 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjcwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5mbvz_calico-system(9805d816-e7c8-479d-9360-d3b3efa64586): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:25:25.285004 kubelet[2659]: E1108 00:25:25.284933 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5mbvz" podUID="9805d816-e7c8-479d-9360-d3b3efa64586" Nov 8 00:25:26.723272 kubelet[2659]: E1108 00:25:26.723227 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-999d4cc44-xrwzd" podUID="060a07da-b44f-4d4c-ae28-2a94dae48d16" Nov 8 00:25:26.724502 containerd[1574]: time="2025-11-08T00:25:26.724200294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:25:27.165196 containerd[1574]: time="2025-11-08T00:25:27.165131928Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:25:27.203299 containerd[1574]: time="2025-11-08T00:25:27.203244797Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:25:27.203394 containerd[1574]: time="2025-11-08T00:25:27.203221539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:25:27.203615 kubelet[2659]: E1108 00:25:27.203550 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:25:27.203672 kubelet[2659]: E1108 00:25:27.203615 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:25:27.203804 kubelet[2659]: E1108 00:25:27.203767 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lz982,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54f8bbf6f-npzst_calico-apiserver(9d10b4e8-1a4b-40ad-b663-a53c60424a45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:25:27.205083 kubelet[2659]: E1108 00:25:27.205050 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f8bbf6f-npzst" podUID="9d10b4e8-1a4b-40ad-b663-a53c60424a45" Nov 8 00:25:27.720938 kubelet[2659]: E1108 00:25:27.720892 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:25:28.126889 systemd[1]: Started sshd@22-10.0.0.93:22-10.0.0.1:40248.service - OpenSSH per-connection server daemon (10.0.0.1:40248). 
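Back-off never repairs a missing tag, so these errors will repeat until the pod specs point at an image the registry actually serves. The same distribution API can list what is available; a sketch reusing the anonymous-token flow above, with the repository name taken from the log:

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	repo := "flatcar/calico/apiserver"
	resp, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	var tok struct{ Token string }
	json.NewDecoder(resp.Body).Decode(&tok)
	resp.Body.Close()

	// GET /v2/<name>/tags/list is the standard tags endpoint in the
	// OCI/Docker distribution spec.
	req, _ := http.NewRequest(http.MethodGet, "https://ghcr.io/v2/"+repo+"/tags/list", nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(res.Body)
	res.Body.Close()
	fmt.Println(string(body)) // {"name":"...","tags":[...]} on success
}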
Nov 8 00:25:28.157081 sshd[6327]: Accepted publickey for core from 10.0.0.1 port 40248 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:25:28.158908 sshd[6327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:25:28.163726 systemd-logind[1557]: New session 23 of user core.
Nov 8 00:25:28.170832 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 8 00:25:28.311913 sshd[6327]: pam_unix(sshd:session): session closed for user core
Nov 8 00:25:28.316042 systemd[1]: sshd@22-10.0.0.93:22-10.0.0.1:40248.service: Deactivated successfully.
Nov 8 00:25:28.318828 systemd-logind[1557]: Session 23 logged out. Waiting for processes to exit.
Nov 8 00:25:28.318897 systemd[1]: session-23.scope: Deactivated successfully.
Nov 8 00:25:28.320244 systemd-logind[1557]: Removed session 23.
Nov 8 00:25:29.722415 containerd[1574]: time="2025-11-08T00:25:29.722281652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:25:30.143634 containerd[1574]: time="2025-11-08T00:25:30.143575529Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:25:30.145560 containerd[1574]: time="2025-11-08T00:25:30.145486620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:25:30.145766 containerd[1574]: time="2025-11-08T00:25:30.145541801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:25:30.145868 kubelet[2659]: E1108 00:25:30.145799 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:25:30.146267 kubelet[2659]: E1108 00:25:30.145881 2659 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:25:30.146267 kubelet[2659]: E1108 00:25:30.146044 2659 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jmh8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54f8bbf6f-szk2c_calico-apiserver(a83aa0bc-f007-4d1f-95cf-997e1c8ab851): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:25:30.147520 kubelet[2659]: E1108 00:25:30.147475 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f8bbf6f-szk2c" podUID="a83aa0bc-f007-4d1f-95cf-997e1c8ab851"
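The &Container dumps above also preserve the readiness probe these pods would run once started: an HTTPS GET of /readyz on port 5443 with a 5-second timeout, every 60 seconds, with three consecutive failures marking the pod unready. A standalone Go sketch of an equivalent check, assuming a placeholder pod IP (not from this log); skipping certificate verification mirrors how kubelet HTTPS probes treat self-signed serving certs:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Mirrors the probe spec in the dump: Scheme:HTTPS, Path:/readyz,
	// Port 5443, TimeoutSeconds:5.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	// 10.244.0.10 is a placeholder pod IP for illustration only.
	resp, err := client.Get("https://10.244.0.10:5443/readyz")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()

	// The kubelet counts any status in the 200-399 range as a success
	// (SuccessThreshold:1); FailureThreshold:3 governs going unready.
	fmt.Println("probe status:", resp.Status)
}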
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85bb64496c-96z92" podUID="497820bc-a22f-4ed0-899b-b37a4c4036b5" Nov 8 00:25:33.332768 systemd[1]: Started sshd@23-10.0.0.93:22-10.0.0.1:40260.service - OpenSSH per-connection server daemon (10.0.0.1:40260). Nov 8 00:25:33.366759 sshd[6344]: Accepted publickey for core from 10.0.0.1 port 40260 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:25:33.368513 sshd[6344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:33.373308 systemd-logind[1557]: New session 24 of user core. Nov 8 00:25:33.382734 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 8 00:25:33.502007 sshd[6344]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:33.507860 systemd[1]: sshd@23-10.0.0.93:22-10.0.0.1:40260.service: Deactivated successfully. Nov 8 00:25:33.510542 systemd-logind[1557]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:25:33.510718 systemd[1]: session-24.scope: Deactivated successfully. Nov 8 00:25:33.512453 systemd-logind[1557]: Removed session 24. Nov 8 00:25:35.722765 kubelet[2659]: E1108 00:25:35.722695 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5mbvz" podUID="9805d816-e7c8-479d-9360-d3b3efa64586" Nov 8 00:25:36.721563 kubelet[2659]: E1108 00:25:36.721399 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t5h" podUID="e4d2644c-0b49-478b-8030-eea32781a579" Nov 8 00:25:36.721911 kubelet[2659]: E1108 00:25:36.721880 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-65fd777b6d-qk5xd" podUID="258307e3-fc8b-44da-8b83-06fe3d2024fa" Nov 8 00:25:37.720369 kubelet[2659]: E1108 00:25:37.720322 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:25:38.099244 kubelet[2659]: E1108 00:25:38.099209 2659 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:25:38.513682 systemd[1]: Started sshd@24-10.0.0.93:22-10.0.0.1:32892.service - OpenSSH per-connection server daemon (10.0.0.1:32892). Nov 8 00:25:38.545996 sshd[6384]: Accepted publickey for core from 10.0.0.1 port 32892 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:25:38.547757 sshd[6384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:38.552027 systemd-logind[1557]: New session 25 of user core. Nov 8 00:25:38.565718 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 8 00:25:38.683102 sshd[6384]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:38.688068 systemd[1]: sshd@24-10.0.0.93:22-10.0.0.1:32892.service: Deactivated successfully. Nov 8 00:25:38.691203 systemd-logind[1557]: Session 25 logged out. Waiting for processes to exit. Nov 8 00:25:38.691353 systemd[1]: session-25.scope: Deactivated successfully. Nov 8 00:25:38.692338 systemd-logind[1557]: Removed session 25. Nov 8 00:25:38.724172 kubelet[2659]: E1108 00:25:38.724118 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-999d4cc44-xrwzd" podUID="060a07da-b44f-4d4c-ae28-2a94dae48d16" Nov 8 00:25:39.721906 kubelet[2659]: E1108 00:25:39.721851 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f8bbf6f-npzst" podUID="9d10b4e8-1a4b-40ad-b663-a53c60424a45" Nov 8 00:25:41.721082 kubelet[2659]: E1108 00:25:41.721023 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f8bbf6f-szk2c" podUID="a83aa0bc-f007-4d1f-95cf-997e1c8ab851" Nov 8 00:25:43.697718 systemd[1]: 
Nov 8 00:25:43.697718 systemd[1]: Started sshd@25-10.0.0.93:22-10.0.0.1:32894.service - OpenSSH per-connection server daemon (10.0.0.1:32894).
Nov 8 00:25:43.733009 sshd[6399]: Accepted publickey for core from 10.0.0.1 port 32894 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:25:43.735233 sshd[6399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:25:43.739930 systemd-logind[1557]: New session 26 of user core.
Nov 8 00:25:43.746840 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 8 00:25:43.875282 sshd[6399]: pam_unix(sshd:session): session closed for user core
Nov 8 00:25:43.879972 systemd[1]: sshd@25-10.0.0.93:22-10.0.0.1:32894.service: Deactivated successfully.
Nov 8 00:25:43.882763 systemd-logind[1557]: Session 26 logged out. Waiting for processes to exit.
Nov 8 00:25:43.883709 systemd[1]: session-26.scope: Deactivated successfully.
Nov 8 00:25:43.884777 systemd-logind[1557]: Removed session 26.
Nov 8 00:25:44.722447 kubelet[2659]: E1108 00:25:44.722377 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85bb64496c-96z92" podUID="497820bc-a22f-4ed0-899b-b37a4c4036b5"
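By this point every failing container sits in ImagePullBackOff: the kubelet stops pulling on every pod sync and instead waits out an exponential back-off, which is why identical pod_workers errors recur at widening intervals. A rough Go sketch of that schedule, assuming the commonly cited kubelet defaults of a 10-second initial delay doubling to a 5-minute cap (constants assumed, not read from this log):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet image-pull back-off defaults: 10s base delay,
	// doubled after each consecutive failure, capped at 5 minutes.
	const (
		initialDelay = 10 * time.Second
		maxDelay     = 5 * time.Minute
	)

	delay := initialDelay
	for failure := 1; failure <= 8; failure++ {
		fmt.Printf("failure %d: next pull attempt in %v\n", failure, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}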