Nov 6 00:29:35.365671 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 22:11:41 -00 2025 Nov 6 00:29:35.365701 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5a467f58ff1d38830572ea713da04924778847a98299b0cfa25690713b346f38 Nov 6 00:29:35.365717 kernel: BIOS-provided physical RAM map: Nov 6 00:29:35.365727 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 6 00:29:35.365737 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 6 00:29:35.365747 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 6 00:29:35.365758 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Nov 6 00:29:35.365769 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Nov 6 00:29:35.365783 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 6 00:29:35.365796 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Nov 6 00:29:35.365806 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 6 00:29:35.365816 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 6 00:29:35.365826 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 6 00:29:35.365836 kernel: NX (Execute Disable) protection: active Nov 6 00:29:35.365851 kernel: APIC: Static calls initialized Nov 6 00:29:35.365861 kernel: SMBIOS 2.8 present. 
Nov 6 00:29:35.365876 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Nov 6 00:29:35.365887 kernel: DMI: Memory slots populated: 1/1 Nov 6 00:29:35.365897 kernel: Hypervisor detected: KVM Nov 6 00:29:35.365908 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Nov 6 00:29:35.365919 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 6 00:29:35.365930 kernel: kvm-clock: using sched offset of 4986644218 cycles Nov 6 00:29:35.365941 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 6 00:29:35.365956 kernel: tsc: Detected 2794.748 MHz processor Nov 6 00:29:35.365968 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 6 00:29:35.365979 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 6 00:29:35.365991 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Nov 6 00:29:35.366002 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 6 00:29:35.366013 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 6 00:29:35.366024 kernel: Using GB pages for direct mapping Nov 6 00:29:35.366036 kernel: ACPI: Early table checksum verification disabled Nov 6 00:29:35.366050 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Nov 6 00:29:35.366061 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:29:35.366073 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:29:35.366100 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:29:35.366111 kernel: ACPI: FACS 0x000000009CFE0000 000040 Nov 6 00:29:35.366123 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:29:35.366134 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:29:35.366150 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 
BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:29:35.366161 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:29:35.366177 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Nov 6 00:29:35.366189 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Nov 6 00:29:35.366201 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Nov 6 00:29:35.366215 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Nov 6 00:29:35.366227 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Nov 6 00:29:35.366238 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Nov 6 00:29:35.366250 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Nov 6 00:29:35.366261 kernel: No NUMA configuration found Nov 6 00:29:35.366273 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Nov 6 00:29:35.366288 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Nov 6 00:29:35.366300 kernel: Zone ranges: Nov 6 00:29:35.366312 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 6 00:29:35.366323 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Nov 6 00:29:35.366335 kernel: Normal empty Nov 6 00:29:35.366346 kernel: Device empty Nov 6 00:29:35.366358 kernel: Movable zone start for each node Nov 6 00:29:35.366370 kernel: Early memory node ranges Nov 6 00:29:35.366383 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 6 00:29:35.366394 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Nov 6 00:29:35.366405 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Nov 6 00:29:35.366416 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 6 00:29:35.366427 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 6 00:29:35.366438 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Nov 6 00:29:35.366453 kernel: ACPI: PM-Timer IO 
Port: 0x608 Nov 6 00:29:35.366468 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 6 00:29:35.366479 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 6 00:29:35.366490 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 6 00:29:35.366504 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 6 00:29:35.366526 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 6 00:29:35.366536 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 6 00:29:35.366547 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 6 00:29:35.366561 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 6 00:29:35.366572 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 6 00:29:35.366584 kernel: TSC deadline timer available Nov 6 00:29:35.366595 kernel: CPU topo: Max. logical packages: 1 Nov 6 00:29:35.366607 kernel: CPU topo: Max. logical dies: 1 Nov 6 00:29:35.366618 kernel: CPU topo: Max. dies per package: 1 Nov 6 00:29:35.366630 kernel: CPU topo: Max. threads per core: 1 Nov 6 00:29:35.366642 kernel: CPU topo: Num. cores per package: 4 Nov 6 00:29:35.366656 kernel: CPU topo: Num. 
threads per package: 4 Nov 6 00:29:35.366668 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Nov 6 00:29:35.366679 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 6 00:29:35.366691 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 6 00:29:35.366703 kernel: kvm-guest: setup PV sched yield Nov 6 00:29:35.366715 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 6 00:29:35.366726 kernel: Booting paravirtualized kernel on KVM Nov 6 00:29:35.366741 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 6 00:29:35.366753 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Nov 6 00:29:35.366765 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Nov 6 00:29:35.366777 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Nov 6 00:29:35.366788 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 6 00:29:35.366800 kernel: kvm-guest: PV spinlocks enabled Nov 6 00:29:35.366811 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 6 00:29:35.366825 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5a467f58ff1d38830572ea713da04924778847a98299b0cfa25690713b346f38 Nov 6 00:29:35.366840 kernel: random: crng init done Nov 6 00:29:35.366851 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 6 00:29:35.366863 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 6 00:29:35.366875 kernel: Fallback order for Node 0: 0 Nov 6 00:29:35.366887 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 642938 Nov 6 00:29:35.366898 kernel: Policy zone: DMA32 Nov 6 00:29:35.366911 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 6 00:29:35.366922 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 6 00:29:35.366932 kernel: ftrace: allocating 40092 entries in 157 pages Nov 6 00:29:35.366943 kernel: ftrace: allocated 157 pages with 5 groups Nov 6 00:29:35.366953 kernel: Dynamic Preempt: voluntary Nov 6 00:29:35.366963 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 6 00:29:35.366975 kernel: rcu: RCU event tracing is enabled. Nov 6 00:29:35.366989 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 6 00:29:35.367000 kernel: Trampoline variant of Tasks RCU enabled. Nov 6 00:29:35.367015 kernel: Rude variant of Tasks RCU enabled. Nov 6 00:29:35.367025 kernel: Tracing variant of Tasks RCU enabled. Nov 6 00:29:35.367035 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 6 00:29:35.367046 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 6 00:29:35.367057 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 6 00:29:35.367068 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 6 00:29:35.367096 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 6 00:29:35.367107 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 6 00:29:35.367118 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 6 00:29:35.367140 kernel: Console: colour VGA+ 80x25 Nov 6 00:29:35.367155 kernel: printk: legacy console [ttyS0] enabled Nov 6 00:29:35.367167 kernel: ACPI: Core revision 20240827 Nov 6 00:29:35.367179 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 6 00:29:35.367192 kernel: APIC: Switch to symmetric I/O mode setup Nov 6 00:29:35.367204 kernel: x2apic enabled Nov 6 00:29:35.367219 kernel: APIC: Switched APIC routing to: physical x2apic Nov 6 00:29:35.367235 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 6 00:29:35.367248 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 6 00:29:35.367260 kernel: kvm-guest: setup PV IPIs Nov 6 00:29:35.367275 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 6 00:29:35.367288 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Nov 6 00:29:35.367300 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Nov 6 00:29:35.367313 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 6 00:29:35.367325 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 6 00:29:35.367337 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 6 00:29:35.367350 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 6 00:29:35.367365 kernel: Spectre V2 : Mitigation: Retpolines Nov 6 00:29:35.367377 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 6 00:29:35.367389 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 6 00:29:35.367402 kernel: active return thunk: retbleed_return_thunk Nov 6 00:29:35.367414 kernel: RETBleed: Mitigation: untrained return thunk Nov 6 00:29:35.367429 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 6 00:29:35.367443 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 6 00:29:35.367460 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 6 00:29:35.367473 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 6 00:29:35.367485 kernel: active return thunk: srso_return_thunk Nov 6 00:29:35.367498 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 6 00:29:35.367524 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 6 00:29:35.367537 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 6 00:29:35.367549 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 6 00:29:35.367565 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 6 00:29:35.367578 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Nov 6 00:29:35.367590 kernel: Freeing SMP alternatives memory: 32K Nov 6 00:29:35.367603 kernel: pid_max: default: 32768 minimum: 301 Nov 6 00:29:35.367615 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 6 00:29:35.367627 kernel: landlock: Up and running. Nov 6 00:29:35.367639 kernel: SELinux: Initializing. Nov 6 00:29:35.367658 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 6 00:29:35.367669 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 6 00:29:35.367681 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 6 00:29:35.367691 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 6 00:29:35.367702 kernel: ... version: 0 Nov 6 00:29:35.367713 kernel: ... bit width: 48 Nov 6 00:29:35.367724 kernel: ... generic registers: 6 Nov 6 00:29:35.367740 kernel: ... value mask: 0000ffffffffffff Nov 6 00:29:35.367756 kernel: ... max period: 00007fffffffffff Nov 6 00:29:35.367769 kernel: ... fixed-purpose events: 0 Nov 6 00:29:35.367781 kernel: ... event mask: 000000000000003f Nov 6 00:29:35.367793 kernel: signal: max sigframe size: 1776 Nov 6 00:29:35.367807 kernel: rcu: Hierarchical SRCU implementation. Nov 6 00:29:35.367821 kernel: rcu: Max phase no-delay instances is 400. Nov 6 00:29:35.367836 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 6 00:29:35.367848 kernel: smp: Bringing up secondary CPUs ... Nov 6 00:29:35.367860 kernel: smpboot: x86: Booting SMP configuration: Nov 6 00:29:35.367872 kernel: .... 
node #0, CPUs: #1 #2 #3 Nov 6 00:29:35.367885 kernel: smp: Brought up 1 node, 4 CPUs Nov 6 00:29:35.367897 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Nov 6 00:29:35.367910 kernel: Memory: 2451436K/2571752K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 114376K reserved, 0K cma-reserved) Nov 6 00:29:35.367925 kernel: devtmpfs: initialized Nov 6 00:29:35.367938 kernel: x86/mm: Memory block size: 128MB Nov 6 00:29:35.367950 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 6 00:29:35.367962 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 6 00:29:35.367975 kernel: pinctrl core: initialized pinctrl subsystem Nov 6 00:29:35.367987 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 6 00:29:35.367999 kernel: audit: initializing netlink subsys (disabled) Nov 6 00:29:35.368014 kernel: audit: type=2000 audit(1762388972.693:1): state=initialized audit_enabled=0 res=1 Nov 6 00:29:35.368026 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 6 00:29:35.368039 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 6 00:29:35.368051 kernel: cpuidle: using governor menu Nov 6 00:29:35.368063 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 6 00:29:35.368075 kernel: dca service started, version 1.12.1 Nov 6 00:29:35.368107 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Nov 6 00:29:35.368123 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 6 00:29:35.368135 kernel: PCI: Using configuration type 1 for base access Nov 6 00:29:35.368148 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 6 00:29:35.368160 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 6 00:29:35.368173 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 6 00:29:35.368185 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 6 00:29:35.368198 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 6 00:29:35.368213 kernel: ACPI: Added _OSI(Module Device) Nov 6 00:29:35.368225 kernel: ACPI: Added _OSI(Processor Device) Nov 6 00:29:35.368237 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 6 00:29:35.368249 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 6 00:29:35.368261 kernel: ACPI: Interpreter enabled Nov 6 00:29:35.368273 kernel: ACPI: PM: (supports S0 S3 S5) Nov 6 00:29:35.368285 kernel: ACPI: Using IOAPIC for interrupt routing Nov 6 00:29:35.368298 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 6 00:29:35.368313 kernel: PCI: Using E820 reservations for host bridge windows Nov 6 00:29:35.368325 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 6 00:29:35.368337 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 6 00:29:35.368660 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 6 00:29:35.368881 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 6 00:29:35.369118 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 6 00:29:35.369135 kernel: PCI host bridge to bus 0000:00 Nov 6 00:29:35.369349 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 6 00:29:35.369564 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 6 00:29:35.369761 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 6 00:29:35.369958 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Nov 6 00:29:35.370182 kernel: pci_bus 0000:00: root 
bus resource [mem 0xc0000000-0xfebfffff window] Nov 6 00:29:35.370379 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Nov 6 00:29:35.370585 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 6 00:29:35.370820 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Nov 6 00:29:35.371046 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Nov 6 00:29:35.371286 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Nov 6 00:29:35.371506 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Nov 6 00:29:35.371729 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Nov 6 00:29:35.371938 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 6 00:29:35.372190 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Nov 6 00:29:35.372427 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Nov 6 00:29:35.372664 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Nov 6 00:29:35.372874 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Nov 6 00:29:35.373115 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Nov 6 00:29:35.373333 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Nov 6 00:29:35.373577 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Nov 6 00:29:35.373799 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] Nov 6 00:29:35.374036 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 6 00:29:35.374278 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Nov 6 00:29:35.374564 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Nov 6 00:29:35.374777 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Nov 6 00:29:35.374986 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Nov 6 00:29:35.375225 kernel: pci 
0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Nov 6 00:29:35.375447 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 6 00:29:35.375689 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Nov 6 00:29:35.375900 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Nov 6 00:29:35.376128 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Nov 6 00:29:35.376357 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Nov 6 00:29:35.376582 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Nov 6 00:29:35.376598 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 6 00:29:35.376610 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 6 00:29:35.376622 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 6 00:29:35.376634 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 6 00:29:35.376646 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 6 00:29:35.376657 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 6 00:29:35.376673 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 6 00:29:35.376685 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 6 00:29:35.376696 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 6 00:29:35.376707 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 6 00:29:35.376719 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 6 00:29:35.376730 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 6 00:29:35.376742 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 6 00:29:35.376757 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 6 00:29:35.376768 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 6 00:29:35.376780 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 6 00:29:35.376792 kernel: iommu: Default 
domain type: Translated Nov 6 00:29:35.376803 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 6 00:29:35.376814 kernel: PCI: Using ACPI for IRQ routing Nov 6 00:29:35.376826 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 6 00:29:35.376840 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 6 00:29:35.376851 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Nov 6 00:29:35.377056 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 6 00:29:35.377286 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 6 00:29:35.377494 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 6 00:29:35.377509 kernel: vgaarb: loaded Nov 6 00:29:35.377530 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 6 00:29:35.377547 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 6 00:29:35.377559 kernel: clocksource: Switched to clocksource kvm-clock Nov 6 00:29:35.377570 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 00:29:35.377582 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 00:29:35.377593 kernel: pnp: PnP ACPI init Nov 6 00:29:35.377813 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 6 00:29:35.377833 kernel: pnp: PnP ACPI: found 6 devices Nov 6 00:29:35.377845 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 6 00:29:35.377857 kernel: NET: Registered PF_INET protocol family Nov 6 00:29:35.377868 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 6 00:29:35.377880 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 6 00:29:35.377892 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 00:29:35.377904 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 6 00:29:35.377918 kernel: TCP bind hash table entries: 32768 (order: 8, 
1048576 bytes, linear) Nov 6 00:29:35.377930 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 6 00:29:35.377942 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 6 00:29:35.377954 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 6 00:29:35.377965 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 00:29:35.377977 kernel: NET: Registered PF_XDP protocol family Nov 6 00:29:35.378187 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 6 00:29:35.378382 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 6 00:29:35.378590 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 6 00:29:35.378782 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Nov 6 00:29:35.378971 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 6 00:29:35.379178 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Nov 6 00:29:35.379194 kernel: PCI: CLS 0 bytes, default 64 Nov 6 00:29:35.379206 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Nov 6 00:29:35.379222 kernel: Initialise system trusted keyrings Nov 6 00:29:35.379234 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 6 00:29:35.379246 kernel: Key type asymmetric registered Nov 6 00:29:35.379257 kernel: Asymmetric key parser 'x509' registered Nov 6 00:29:35.379269 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 6 00:29:35.379281 kernel: io scheduler mq-deadline registered Nov 6 00:29:35.379293 kernel: io scheduler kyber registered Nov 6 00:29:35.379307 kernel: io scheduler bfq registered Nov 6 00:29:35.379319 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 6 00:29:35.379331 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 6 00:29:35.379343 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 6 00:29:35.379355 kernel: ACPI: 
\_SB_.GSIE: Enabled at IRQ 20 Nov 6 00:29:35.379366 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 6 00:29:35.379378 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 00:29:35.379392 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 6 00:29:35.379403 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 6 00:29:35.379415 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 6 00:29:35.379637 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 6 00:29:35.379654 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 6 00:29:35.379849 kernel: rtc_cmos 00:04: registered as rtc0 Nov 6 00:29:35.380047 kernel: rtc_cmos 00:04: setting system clock to 2025-11-06T00:29:33 UTC (1762388973) Nov 6 00:29:35.380270 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 6 00:29:35.380287 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 6 00:29:35.380299 kernel: NET: Registered PF_INET6 protocol family Nov 6 00:29:35.380311 kernel: Segment Routing with IPv6 Nov 6 00:29:35.380323 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 00:29:35.380336 kernel: NET: Registered PF_PACKET protocol family Nov 6 00:29:35.380353 kernel: Key type dns_resolver registered Nov 6 00:29:35.380364 kernel: IPI shorthand broadcast: enabled Nov 6 00:29:35.380376 kernel: sched_clock: Marking stable (1270005190, 263079577)->(1598137127, -65052360) Nov 6 00:29:35.380388 kernel: registered taskstats version 1 Nov 6 00:29:35.380400 kernel: Loading compiled-in X.509 certificates Nov 6 00:29:35.380411 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 92154d1aa04a8c1424f65981683e67110e07d121' Nov 6 00:29:35.380423 kernel: Demotion targets for Node 0: null Nov 6 00:29:35.380434 kernel: Key type .fscrypt registered Nov 6 00:29:35.380449 kernel: Key type fscrypt-provisioning registered Nov 6 00:29:35.380460 
kernel: ima: No TPM chip found, activating TPM-bypass! Nov 6 00:29:35.380472 kernel: ima: Allocated hash algorithm: sha1 Nov 6 00:29:35.380483 kernel: ima: No architecture policies found Nov 6 00:29:35.380495 kernel: clk: Disabling unused clocks Nov 6 00:29:35.380506 kernel: Freeing unused kernel image (initmem) memory: 15936K Nov 6 00:29:35.380530 kernel: Write protecting the kernel read-only data: 40960k Nov 6 00:29:35.380545 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 6 00:29:35.380557 kernel: Run /init as init process Nov 6 00:29:35.380568 kernel: with arguments: Nov 6 00:29:35.380580 kernel: /init Nov 6 00:29:35.380591 kernel: with environment: Nov 6 00:29:35.380602 kernel: HOME=/ Nov 6 00:29:35.380614 kernel: TERM=linux Nov 6 00:29:35.380627 kernel: SCSI subsystem initialized Nov 6 00:29:35.380639 kernel: libata version 3.00 loaded. Nov 6 00:29:35.380855 kernel: ahci 0000:00:1f.2: version 3.0 Nov 6 00:29:35.380894 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 6 00:29:35.381132 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 6 00:29:35.381332 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 6 00:29:35.381549 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 6 00:29:35.381775 kernel: scsi host0: ahci Nov 6 00:29:35.381984 kernel: scsi host1: ahci Nov 6 00:29:35.382211 kernel: scsi host2: ahci Nov 6 00:29:35.382457 kernel: scsi host3: ahci Nov 6 00:29:35.382683 kernel: scsi host4: ahci Nov 6 00:29:35.382917 kernel: scsi host5: ahci Nov 6 00:29:35.382934 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Nov 6 00:29:35.382945 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Nov 6 00:29:35.382957 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Nov 6 00:29:35.382968 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Nov 6 
00:29:35.382980 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Nov 6 00:29:35.382995 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Nov 6 00:29:35.383006 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 6 00:29:35.383017 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 6 00:29:35.383028 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 6 00:29:35.383039 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 6 00:29:35.383050 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 6 00:29:35.383061 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 6 00:29:35.383074 kernel: ata3.00: LPM support broken, forcing max_power Nov 6 00:29:35.383104 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 6 00:29:35.383115 kernel: ata3.00: applying bridge limits Nov 6 00:29:35.383126 kernel: ata3.00: LPM support broken, forcing max_power Nov 6 00:29:35.383137 kernel: ata3.00: configured for UDMA/100 Nov 6 00:29:35.383370 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 6 00:29:35.383597 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 6 00:29:35.383796 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Nov 6 00:29:35.383810 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 6 00:29:35.383821 kernel: GPT:16515071 != 27000831 Nov 6 00:29:35.383832 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 6 00:29:35.383843 kernel: GPT:16515071 != 27000831 Nov 6 00:29:35.383854 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 6 00:29:35.383868 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 6 00:29:35.384078 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 6 00:29:35.384109 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 6 00:29:35.384325 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 6 00:29:35.384340 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 6 00:29:35.384351 kernel: device-mapper: uevent: version 1.0.3
Nov 6 00:29:35.384362 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 6 00:29:35.384378 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 6 00:29:35.384392 kernel: raid6: avx2x4 gen() 24814 MB/s
Nov 6 00:29:35.384404 kernel: raid6: avx2x2 gen() 24123 MB/s
Nov 6 00:29:35.384416 kernel: raid6: avx2x1 gen() 20195 MB/s
Nov 6 00:29:35.384430 kernel: raid6: using algorithm avx2x4 gen() 24814 MB/s
Nov 6 00:29:35.384443 kernel: raid6: .... xor() 6394 MB/s, rmw enabled
Nov 6 00:29:35.384455 kernel: raid6: using avx2x2 recovery algorithm
Nov 6 00:29:35.384467 kernel: xor: automatically using best checksumming function avx
Nov 6 00:29:35.384479 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 6 00:29:35.384491 kernel: BTRFS: device fsid 4dd99ff0-78f7-441c-acc1-7ff3d924a9b4 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (182)
Nov 6 00:29:35.384503 kernel: BTRFS info (device dm-0): first mount of filesystem 4dd99ff0-78f7-441c-acc1-7ff3d924a9b4
Nov 6 00:29:35.384530 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:29:35.384541 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 6 00:29:35.384553 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 6 00:29:35.384565 kernel: loop: module loaded
Nov 6 00:29:35.384578 kernel: loop0: detected capacity change from 0 to 100120
Nov 6 00:29:35.384591 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 6 00:29:35.384606 systemd[1]: Successfully made /usr/ read-only.
Nov 6 00:29:35.384626 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 00:29:35.384641 systemd[1]: Detected virtualization kvm.
Nov 6 00:29:35.384654 systemd[1]: Detected architecture x86-64.
Nov 6 00:29:35.384667 systemd[1]: Running in initrd.
Nov 6 00:29:35.384680 systemd[1]: No hostname configured, using default hostname.
Nov 6 00:29:35.384697 systemd[1]: Hostname set to .
Nov 6 00:29:35.384713 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 6 00:29:35.384726 systemd[1]: Queued start job for default target initrd.target.
Nov 6 00:29:35.384740 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 6 00:29:35.384754 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:29:35.384767 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:29:35.384782 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 6 00:29:35.384799 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 00:29:35.384813 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 6 00:29:35.384827 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 6 00:29:35.384841 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:29:35.384854 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:29:35.384868 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 6 00:29:35.384884 systemd[1]: Reached target paths.target - Path Units.
Nov 6 00:29:35.384898 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 00:29:35.384912 systemd[1]: Reached target swap.target - Swaps.
Nov 6 00:29:35.384925 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 00:29:35.384939 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 00:29:35.384953 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 00:29:35.384967 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 6 00:29:35.384983 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 6 00:29:35.384997 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:29:35.385011 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:29:35.385024 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 00:29:35.385038 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 00:29:35.385052 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 6 00:29:35.385068 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 6 00:29:35.385082 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 00:29:35.385110 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 6 00:29:35.385124 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 6 00:29:35.385138 systemd[1]: Starting systemd-fsck-usr.service...
Nov 6 00:29:35.385151 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 00:29:35.385165 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 00:29:35.385182 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:29:35.385196 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 6 00:29:35.385210 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 00:29:35.385224 systemd[1]: Finished systemd-fsck-usr.service.
Nov 6 00:29:35.385241 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 6 00:29:35.385287 systemd-journald[317]: Collecting audit messages is disabled.
Nov 6 00:29:35.385318 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 00:29:35.385336 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 00:29:35.385349 systemd-journald[317]: Journal started
Nov 6 00:29:35.385375 systemd-journald[317]: Runtime Journal (/run/log/journal/4f3a1de440d0410bbfd6d804ffcd999b) is 6M, max 48.3M, 42.2M free.
Nov 6 00:29:35.390106 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 00:29:35.395125 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 6 00:29:35.398417 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 00:29:35.403898 systemd-modules-load[320]: Inserted module 'br_netfilter'
Nov 6 00:29:35.404856 kernel: Bridge firewalling registered
Nov 6 00:29:35.405247 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 00:29:35.410265 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 00:29:35.414449 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 00:29:35.426287 systemd-tmpfiles[336]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 6 00:29:35.502522 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:29:35.506615 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:29:35.513637 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 6 00:29:35.531465 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:29:35.535374 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 00:29:35.555729 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 00:29:35.562441 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 6 00:29:35.606398 dracut-cmdline[361]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5a467f58ff1d38830572ea713da04924778847a98299b0cfa25690713b346f38
Nov 6 00:29:35.616532 systemd-resolved[349]: Positive Trust Anchors:
Nov 6 00:29:35.616552 systemd-resolved[349]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 00:29:35.616557 systemd-resolved[349]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 6 00:29:35.616599 systemd-resolved[349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 00:29:35.648585 systemd-resolved[349]: Defaulting to hostname 'linux'.
Nov 6 00:29:35.650020 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 00:29:35.652462 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:29:35.751123 kernel: Loading iSCSI transport class v2.0-870.
Nov 6 00:29:35.767117 kernel: iscsi: registered transport (tcp)
Nov 6 00:29:35.792171 kernel: iscsi: registered transport (qla4xxx)
Nov 6 00:29:35.792251 kernel: QLogic iSCSI HBA Driver
Nov 6 00:29:35.825608 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 6 00:29:35.852436 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 00:29:35.856377 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 6 00:29:35.922377 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 6 00:29:35.925809 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 6 00:29:35.929194 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 6 00:29:35.975640 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 6 00:29:35.980759 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:29:36.021921 systemd-udevd[595]: Using default interface naming scheme 'v257'.
Nov 6 00:29:36.042449 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:29:36.046602 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 6 00:29:36.080582 dracut-pre-trigger[663]: rd.md=0: removing MD RAID activation
Nov 6 00:29:36.091317 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 00:29:36.094676 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 00:29:36.131634 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 00:29:36.135428 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 00:29:36.161058 systemd-networkd[714]: lo: Link UP
Nov 6 00:29:36.161068 systemd-networkd[714]: lo: Gained carrier
Nov 6 00:29:36.161829 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 00:29:36.164644 systemd[1]: Reached target network.target - Network.
Nov 6 00:29:36.240405 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 00:29:36.244681 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 6 00:29:36.312181 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 6 00:29:36.333837 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 6 00:29:36.350666 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 6 00:29:36.356436 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 6 00:29:36.361126 kernel: cryptd: max_cpu_qlen set to 1000
Nov 6 00:29:36.364759 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 6 00:29:36.372273 systemd-networkd[714]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 6 00:29:36.372289 systemd-networkd[714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 6 00:29:36.380250 kernel: AES CTR mode by8 optimization enabled
Nov 6 00:29:36.372973 systemd-networkd[714]: eth0: Link UP
Nov 6 00:29:36.373451 systemd-networkd[714]: eth0: Gained carrier
Nov 6 00:29:36.373464 systemd-networkd[714]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 6 00:29:36.378307 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 6 00:29:36.383452 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 00:29:36.383681 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:29:36.385689 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:29:36.404178 systemd-networkd[714]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 6 00:29:36.407838 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:29:36.421839 disk-uuid[824]: Primary Header is updated.
Nov 6 00:29:36.421839 disk-uuid[824]: Secondary Entries is updated.
Nov 6 00:29:36.421839 disk-uuid[824]: Secondary Header is updated.
Nov 6 00:29:36.497852 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 6 00:29:36.533272 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:29:36.557393 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 00:29:36.559669 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 00:29:36.562435 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 00:29:36.569154 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 6 00:29:36.607317 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 00:29:37.474245 disk-uuid[838]: Warning: The kernel is still using the old partition table.
Nov 6 00:29:37.474245 disk-uuid[838]: The new table will be used at the next reboot or after you
Nov 6 00:29:37.474245 disk-uuid[838]: run partprobe(8) or kpartx(8)
Nov 6 00:29:37.474245 disk-uuid[838]: The operation has completed successfully.
Nov 6 00:29:37.491112 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 6 00:29:37.491348 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 6 00:29:37.494476 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 6 00:29:37.561145 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865)
Nov 6 00:29:37.566385 kernel: BTRFS info (device vda6): first mount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552
Nov 6 00:29:37.566465 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:29:37.571811 kernel: BTRFS info (device vda6): turning on async discard
Nov 6 00:29:37.571914 kernel: BTRFS info (device vda6): enabling free space tree
Nov 6 00:29:37.583321 kernel: BTRFS info (device vda6): last unmount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552
Nov 6 00:29:37.584425 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 6 00:29:37.589900 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 6 00:29:38.271453 systemd-networkd[714]: eth0: Gained IPv6LL
Nov 6 00:29:39.083166 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1777413309 wd_nsec: 1777412724
Nov 6 00:29:39.536903 ignition[884]: Ignition 2.22.0
Nov 6 00:29:39.536929 ignition[884]: Stage: fetch-offline
Nov 6 00:29:39.537016 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:29:39.537037 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 00:29:39.537254 ignition[884]: parsed url from cmdline: ""
Nov 6 00:29:39.537261 ignition[884]: no config URL provided
Nov 6 00:29:39.537273 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
Nov 6 00:29:39.537293 ignition[884]: no config at "/usr/lib/ignition/user.ign"
Nov 6 00:29:39.537372 ignition[884]: op(1): [started] loading QEMU firmware config module
Nov 6 00:29:39.537380 ignition[884]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 6 00:29:39.557127 ignition[884]: op(1): [finished] loading QEMU firmware config module
Nov 6 00:29:39.643144 ignition[884]: parsing config with SHA512: 1654894fc5f0809c357c12b8c5bb04c89fdc983b201415c465e1da2526c66ad0dbe6b5ef8ad98bae072ff73940633807a3814a9eb17ebf04be9910f911fc0125
Nov 6 00:29:39.654675 unknown[884]: fetched base config from "system"
Nov 6 00:29:39.654695 unknown[884]: fetched user config from "qemu"
Nov 6 00:29:39.655485 ignition[884]: fetch-offline: fetch-offline passed
Nov 6 00:29:39.655609 ignition[884]: Ignition finished successfully
Nov 6 00:29:39.663813 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 00:29:39.664872 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 6 00:29:39.666023 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 6 00:29:39.734290 ignition[896]: Ignition 2.22.0
Nov 6 00:29:39.734304 ignition[896]: Stage: kargs
Nov 6 00:29:39.734481 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:29:39.734493 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 00:29:39.735240 ignition[896]: kargs: kargs passed
Nov 6 00:29:39.735288 ignition[896]: Ignition finished successfully
Nov 6 00:29:39.746603 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 6 00:29:39.749143 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 6 00:29:39.803369 ignition[904]: Ignition 2.22.0
Nov 6 00:29:39.803391 ignition[904]: Stage: disks
Nov 6 00:29:39.803534 ignition[904]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:29:39.803545 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 00:29:39.804328 ignition[904]: disks: disks passed
Nov 6 00:29:39.809190 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 6 00:29:39.804381 ignition[904]: Ignition finished successfully
Nov 6 00:29:39.810874 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 6 00:29:39.813145 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 6 00:29:39.813769 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 6 00:29:39.823226 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 6 00:29:39.824963 systemd[1]: Reached target basic.target - Basic System.
Nov 6 00:29:39.832423 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 6 00:29:39.884853 systemd-fsck[914]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 6 00:29:40.313231 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 6 00:29:40.315521 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 6 00:29:40.529127 kernel: EXT4-fs (vda9): mounted filesystem d1cfc077-cc9a-4d2c-97de-8a87792eb8cf r/w with ordered data mode. Quota mode: none.
Nov 6 00:29:40.530454 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 6 00:29:40.533723 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 6 00:29:40.538919 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 6 00:29:40.541376 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 6 00:29:40.543434 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 6 00:29:40.543479 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 6 00:29:40.543506 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 00:29:40.706915 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 6 00:29:40.719521 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (922)
Nov 6 00:29:40.719557 kernel: BTRFS info (device vda6): first mount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552
Nov 6 00:29:40.719574 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:29:40.719593 kernel: BTRFS info (device vda6): turning on async discard
Nov 6 00:29:40.719610 kernel: BTRFS info (device vda6): enabling free space tree
Nov 6 00:29:40.710662 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 6 00:29:40.722387 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 6 00:29:40.831034 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory
Nov 6 00:29:40.837967 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory
Nov 6 00:29:40.845339 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory
Nov 6 00:29:40.850271 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 6 00:29:40.973937 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 6 00:29:40.978161 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 6 00:29:40.979866 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 6 00:29:41.012874 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 6 00:29:41.015730 kernel: BTRFS info (device vda6): last unmount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552
Nov 6 00:29:41.037300 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 6 00:29:41.069684 ignition[1036]: INFO : Ignition 2.22.0
Nov 6 00:29:41.069684 ignition[1036]: INFO : Stage: mount
Nov 6 00:29:41.072882 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 00:29:41.072882 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 00:29:41.072882 ignition[1036]: INFO : mount: mount passed
Nov 6 00:29:41.072882 ignition[1036]: INFO : Ignition finished successfully
Nov 6 00:29:41.083062 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 6 00:29:41.085216 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 6 00:29:41.533259 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 6 00:29:41.569564 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1048)
Nov 6 00:29:41.569650 kernel: BTRFS info (device vda6): first mount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552
Nov 6 00:29:41.569676 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:29:41.576022 kernel: BTRFS info (device vda6): turning on async discard
Nov 6 00:29:41.576137 kernel: BTRFS info (device vda6): enabling free space tree
Nov 6 00:29:41.578000 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 6 00:29:41.619082 ignition[1065]: INFO : Ignition 2.22.0
Nov 6 00:29:41.619082 ignition[1065]: INFO : Stage: files
Nov 6 00:29:41.622500 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 00:29:41.622500 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 00:29:41.622500 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping
Nov 6 00:29:41.622500 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 6 00:29:41.622500 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 6 00:29:41.633504 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 6 00:29:41.633504 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 6 00:29:41.633504 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 6 00:29:41.633504 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 6 00:29:41.633504 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 6 00:29:41.626225 unknown[1065]: wrote ssh authorized keys file for user: core
Nov 6 00:29:41.681771 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 6 00:29:41.747029 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 6 00:29:41.750532 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 6 00:29:41.750532 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 6 00:29:41.750532 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 6 00:29:41.750532 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 6 00:29:41.750532 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 6 00:29:41.750532 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 6 00:29:41.750532 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 6 00:29:41.750532 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 6 00:29:41.775407 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 6 00:29:41.775407 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 6 00:29:41.775407 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 6 00:29:41.775407 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 6 00:29:41.775407 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 6 00:29:41.775407 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 6 00:29:42.236816 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 6 00:29:42.888133 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 6 00:29:42.888133 ignition[1065]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 6 00:29:42.896237 ignition[1065]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 00:29:42.896237 ignition[1065]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 00:29:42.896237 ignition[1065]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 6 00:29:42.896237 ignition[1065]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 6 00:29:42.896237 ignition[1065]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 6 00:29:42.896237 ignition[1065]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 6 00:29:42.896237 ignition[1065]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 6 00:29:42.896237 ignition[1065]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 6 00:29:42.943862 ignition[1065]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 6 00:29:42.953832 ignition[1065]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 6 00:29:42.956558 ignition[1065]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 6 00:29:42.956558 ignition[1065]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 6 00:29:42.956558 ignition[1065]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 6 00:29:42.956558 ignition[1065]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 00:29:42.956558 ignition[1065]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 00:29:42.956558 ignition[1065]: INFO : files: files passed
Nov 6 00:29:42.956558 ignition[1065]: INFO : Ignition finished successfully
Nov 6 00:29:42.965673 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 6 00:29:42.973755 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 6 00:29:42.975488 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 6 00:29:42.992267 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 6 00:29:42.992467 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 6 00:29:42.998628 initrd-setup-root-after-ignition[1097]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 6 00:29:43.001527 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:29:43.001527 initrd-setup-root-after-ignition[1099]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:29:43.005718 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:29:43.004704 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 00:29:43.010008 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 6 00:29:43.017220 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 6 00:29:43.110889 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 6 00:29:43.111048 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 6 00:29:43.114953 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 6 00:29:43.118380 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 6 00:29:43.122227 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 00:29:43.123595 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 6 00:29:43.171211 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:29:43.174443 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 6 00:29:43.214470 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 6 00:29:43.214673 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:29:43.216062 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:29:43.222329 systemd[1]: Stopped target timers.target - Timer Units. Nov 6 00:29:43.223194 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 6 00:29:43.223432 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:29:43.230746 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 6 00:29:43.234229 systemd[1]: Stopped target basic.target - Basic System. Nov 6 00:29:43.235227 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 6 00:29:43.236117 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:29:43.236713 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Nov 6 00:29:43.237565 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 6 00:29:43.238132 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 6 00:29:43.238671 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:29:43.239264 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 6 00:29:43.239831 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 6 00:29:43.240735 systemd[1]: Stopped target swap.target - Swaps. Nov 6 00:29:43.241586 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 6 00:29:43.241799 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:29:43.243341 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:29:43.243910 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:29:43.244730 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 6 00:29:43.244895 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:29:43.245322 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 6 00:29:43.245474 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 6 00:29:43.246611 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 6 00:29:43.246774 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:29:43.247345 systemd[1]: Stopped target paths.target - Path Units. Nov 6 00:29:43.247757 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 6 00:29:43.251169 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:29:43.251608 systemd[1]: Stopped target slices.target - Slice Units. Nov 6 00:29:43.252169 systemd[1]: Stopped target sockets.target - Socket Units. 
Nov 6 00:29:43.252736 systemd[1]: iscsid.socket: Deactivated successfully. Nov 6 00:29:43.252872 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 00:29:43.253575 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 6 00:29:43.253695 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 00:29:43.254179 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 6 00:29:43.254361 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:29:43.254973 systemd[1]: ignition-files.service: Deactivated successfully. Nov 6 00:29:43.255154 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 6 00:29:43.256971 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 6 00:29:43.257641 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 6 00:29:43.257817 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:29:43.259654 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 6 00:29:43.260032 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 6 00:29:43.260211 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:29:43.260813 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 6 00:29:43.260961 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:29:43.261160 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 6 00:29:43.261322 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:29:43.268053 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 6 00:29:43.303238 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 6 00:29:43.379676 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Nov 6 00:29:43.393442 ignition[1123]: INFO : Ignition 2.22.0 Nov 6 00:29:43.393442 ignition[1123]: INFO : Stage: umount Nov 6 00:29:43.397459 ignition[1123]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:29:43.397459 ignition[1123]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:29:43.401922 ignition[1123]: INFO : umount: umount passed Nov 6 00:29:43.401922 ignition[1123]: INFO : Ignition finished successfully Nov 6 00:29:43.407697 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 6 00:29:43.407925 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 6 00:29:43.409383 systemd[1]: Stopped target network.target - Network. Nov 6 00:29:43.409950 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 6 00:29:43.410074 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 6 00:29:43.411001 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 6 00:29:43.411130 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 6 00:29:43.412010 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 6 00:29:43.412169 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 6 00:29:43.413023 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 6 00:29:43.413133 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 6 00:29:43.414352 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 6 00:29:43.415045 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 6 00:29:43.446339 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 6 00:29:43.446541 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 6 00:29:43.454541 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 6 00:29:43.454762 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Nov 6 00:29:43.464662 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 00:29:43.464835 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 00:29:43.468510 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 6 00:29:43.472495 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 6 00:29:43.472578 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:29:43.473502 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 00:29:43.473584 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 00:29:43.485664 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 6 00:29:43.489305 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 6 00:29:43.489449 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:29:43.490655 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 00:29:43.490731 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:29:43.491454 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 6 00:29:43.491509 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 6 00:29:43.492250 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:29:43.523910 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 00:29:43.524187 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:29:43.525534 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 00:29:43.525605 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 6 00:29:43.530587 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 00:29:43.530645 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Nov 6 00:29:43.531127 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 00:29:43.531207 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:29:43.532532 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 00:29:43.532609 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 00:29:43.542276 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 00:29:43.542365 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 00:29:43.551957 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 00:29:43.552897 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 6 00:29:43.552982 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:29:43.559410 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 00:29:43.559496 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:29:43.560685 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:29:43.560765 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:29:43.579469 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 00:29:43.579627 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 00:29:43.590030 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 00:29:43.590274 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 00:29:43.591337 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 00:29:43.596526 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 00:29:43.622481 systemd[1]: Switching root. 
Nov 6 00:29:43.647813 systemd-journald[317]: Journal stopped Nov 6 00:29:45.352277 systemd-journald[317]: Received SIGTERM from PID 1 (systemd). Nov 6 00:29:45.352353 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 00:29:45.352369 kernel: SELinux: policy capability open_perms=1 Nov 6 00:29:45.352381 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 00:29:45.352399 kernel: SELinux: policy capability always_check_network=0 Nov 6 00:29:45.352411 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 00:29:45.352423 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 00:29:45.352443 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 00:29:45.352455 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 00:29:45.352468 kernel: SELinux: policy capability userspace_initial_context=0 Nov 6 00:29:45.352480 kernel: audit: type=1403 audit(1762388984.325:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 00:29:45.352499 systemd[1]: Successfully loaded SELinux policy in 69.159ms. Nov 6 00:29:45.352524 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.801ms. Nov 6 00:29:45.352538 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 00:29:45.352554 systemd[1]: Detected virtualization kvm. Nov 6 00:29:45.352571 systemd[1]: Detected architecture x86-64. Nov 6 00:29:45.352584 systemd[1]: Detected first boot. Nov 6 00:29:45.352597 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 6 00:29:45.352611 zram_generator::config[1171]: No configuration found. 
Nov 6 00:29:45.352625 kernel: Guest personality initialized and is inactive Nov 6 00:29:45.352640 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 6 00:29:45.352652 kernel: Initialized host personality Nov 6 00:29:45.352664 kernel: NET: Registered PF_VSOCK protocol family Nov 6 00:29:45.352676 systemd[1]: Populated /etc with preset unit settings. Nov 6 00:29:45.352688 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 6 00:29:45.352701 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 00:29:45.352714 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 00:29:45.352730 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 00:29:45.352743 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 00:29:45.352756 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 00:29:45.352769 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 00:29:45.352784 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 00:29:45.352802 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 00:29:45.352815 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 00:29:45.352830 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 00:29:45.352844 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:29:45.352857 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:29:45.352870 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 6 00:29:45.352883 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Nov 6 00:29:45.352896 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 00:29:45.352909 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 00:29:45.352925 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 6 00:29:45.352938 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:29:45.352951 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:29:45.352964 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 00:29:45.352976 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 00:29:45.352989 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 00:29:45.353004 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 00:29:45.353017 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:29:45.353030 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:29:45.353042 systemd[1]: Reached target slices.target - Slice Units. Nov 6 00:29:45.353055 systemd[1]: Reached target swap.target - Swaps. Nov 6 00:29:45.353068 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 00:29:45.353098 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 00:29:45.353112 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 00:29:45.353128 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:29:45.353141 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 00:29:45.353153 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:29:45.353166 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Nov 6 00:29:45.353179 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 00:29:45.353193 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 00:29:45.353205 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 00:29:45.353221 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:29:45.353241 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 00:29:45.353254 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 00:29:45.353267 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 6 00:29:45.353281 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 00:29:45.353293 systemd[1]: Reached target machines.target - Containers. Nov 6 00:29:45.353309 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 6 00:29:45.353322 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:29:45.353335 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 00:29:45.353348 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 00:29:45.353361 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:29:45.353374 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:29:45.353393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:29:45.353409 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 6 00:29:45.353422 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 6 00:29:45.353435 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 00:29:45.353448 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 00:29:45.353461 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 00:29:45.353473 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 00:29:45.353486 systemd[1]: Stopped systemd-fsck-usr.service. Nov 6 00:29:45.353502 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:29:45.353515 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 00:29:45.353528 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 00:29:45.353541 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 00:29:45.353554 kernel: ACPI: bus type drm_connector registered Nov 6 00:29:45.353566 kernel: fuse: init (API version 7.41) Nov 6 00:29:45.353579 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 00:29:45.353594 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 00:29:45.353608 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:29:45.353621 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:29:45.353699 systemd-journald[1246]: Collecting audit messages is disabled. Nov 6 00:29:45.353727 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Nov 6 00:29:45.353740 systemd-journald[1246]: Journal started Nov 6 00:29:45.353769 systemd-journald[1246]: Runtime Journal (/run/log/journal/4f3a1de440d0410bbfd6d804ffcd999b) is 6M, max 48.3M, 42.2M free. Nov 6 00:29:44.981032 systemd[1]: Queued start job for default target multi-user.target. Nov 6 00:29:45.001389 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 6 00:29:45.002074 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 00:29:45.356175 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:29:45.358669 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 6 00:29:45.360587 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 00:29:45.362316 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 00:29:45.364217 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 00:29:45.366154 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 00:29:45.368066 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 00:29:45.370326 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:29:45.372629 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 00:29:45.372864 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 00:29:45.375079 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:29:45.375340 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:29:45.377480 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:29:45.377695 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:29:45.379712 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:29:45.379937 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Nov 6 00:29:45.382245 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 00:29:45.382466 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 00:29:45.384537 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:29:45.384753 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:29:45.386859 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 00:29:45.389120 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:29:45.392272 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 00:29:45.394738 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 00:29:45.413564 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:29:45.416345 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 6 00:29:45.419915 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 00:29:45.422841 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 00:29:45.424667 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 00:29:45.424797 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:29:45.427972 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 00:29:45.430409 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:29:45.442742 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 00:29:45.447155 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Nov 6 00:29:45.449416 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:29:45.450951 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 00:29:45.453066 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:29:45.454809 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:29:45.461042 systemd-journald[1246]: Time spent on flushing to /var/log/journal/4f3a1de440d0410bbfd6d804ffcd999b is 21.466ms for 963 entries. Nov 6 00:29:45.461042 systemd-journald[1246]: System Journal (/var/log/journal/4f3a1de440d0410bbfd6d804ffcd999b) is 8M, max 163.5M, 155.5M free. Nov 6 00:29:45.503200 systemd-journald[1246]: Received client request to flush runtime journal. Nov 6 00:29:45.463719 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 00:29:45.467852 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 00:29:45.486381 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:29:45.490173 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 6 00:29:45.492347 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 00:29:45.494842 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 00:29:45.497815 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:29:45.505726 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 00:29:45.512310 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Nov 6 00:29:45.513787 kernel: loop1: detected capacity change from 0 to 128048 Nov 6 00:29:45.517178 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 00:29:45.535363 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 00:29:45.540040 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 00:29:45.544251 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 00:29:45.551148 kernel: loop2: detected capacity change from 0 to 110976 Nov 6 00:29:45.556325 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 00:29:45.563507 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 6 00:29:45.576720 systemd-tmpfiles[1306]: ACLs are not supported, ignoring. Nov 6 00:29:45.576746 systemd-tmpfiles[1306]: ACLs are not supported, ignoring. Nov 6 00:29:45.583678 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:29:45.589118 kernel: loop3: detected capacity change from 0 to 224512 Nov 6 00:29:45.623643 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 00:29:45.628129 kernel: loop4: detected capacity change from 0 to 128048 Nov 6 00:29:45.639122 kernel: loop5: detected capacity change from 0 to 110976 Nov 6 00:29:45.651121 kernel: loop6: detected capacity change from 0 to 224512 Nov 6 00:29:45.657990 (sd-merge)[1317]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 6 00:29:45.663464 (sd-merge)[1317]: Merged extensions into '/usr'. Nov 6 00:29:45.668919 systemd[1]: Reload requested from client PID 1290 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 00:29:45.668939 systemd[1]: Reloading... Nov 6 00:29:45.720683 systemd-resolved[1305]: Positive Trust Anchors: Nov 6 00:29:45.720709 systemd-resolved[1305]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 00:29:45.720716 systemd-resolved[1305]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 6 00:29:45.720748 systemd-resolved[1305]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 00:29:45.725172 systemd-resolved[1305]: Defaulting to hostname 'linux'. Nov 6 00:29:45.732117 zram_generator::config[1346]: No configuration found. Nov 6 00:29:45.936104 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 00:29:45.936444 systemd[1]: Reloading finished in 266 ms. Nov 6 00:29:45.977634 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 00:29:45.980010 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 00:29:45.985177 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:29:46.011990 systemd[1]: Starting ensure-sysext.service... Nov 6 00:29:46.014750 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:29:46.030615 systemd[1]: Reload requested from client PID 1383 ('systemctl') (unit ensure-sysext.service)... Nov 6 00:29:46.030644 systemd[1]: Reloading... Nov 6 00:29:46.039569 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Nov 6 00:29:46.039637 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 6 00:29:46.040242 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 00:29:46.040709 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 00:29:46.042275 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 00:29:46.042715 systemd-tmpfiles[1384]: ACLs are not supported, ignoring. Nov 6 00:29:46.042837 systemd-tmpfiles[1384]: ACLs are not supported, ignoring. Nov 6 00:29:46.088298 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:29:46.088317 systemd-tmpfiles[1384]: Skipping /boot Nov 6 00:29:46.102538 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:29:46.102554 systemd-tmpfiles[1384]: Skipping /boot Nov 6 00:29:46.110126 zram_generator::config[1420]: No configuration found. Nov 6 00:29:46.301474 systemd[1]: Reloading finished in 270 ms. Nov 6 00:29:46.325513 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 6 00:29:46.350421 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:29:46.364119 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:29:46.367563 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 00:29:46.392582 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 00:29:46.396005 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 6 00:29:46.401364 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Nov 6 00:29:46.404954 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 6 00:29:46.410945 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:29:46.411474 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:29:46.413180 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 00:29:46.417773 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 00:29:46.422601 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 00:29:46.424490 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:29:46.424604 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:29:46.424703 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:29:46.428764 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:29:46.430448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:29:46.430639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:29:46.430729 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:29:46.430819 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:29:46.435387 systemd-udevd[1458]: Using default interface naming scheme 'v257'.
Nov 6 00:29:46.440989 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 6 00:29:46.443996 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 00:29:46.444299 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 00:29:46.447350 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 6 00:29:46.450373 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 00:29:46.450605 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 00:29:46.453507 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 00:29:46.453802 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 00:29:46.465299 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:29:46.465579 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:29:46.469365 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 00:29:46.475323 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 6 00:29:46.478454 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 00:29:46.485010 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 00:29:46.486987 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:29:46.487685 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:29:46.488145 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:29:46.491065 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 00:29:46.491625 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 00:29:46.494731 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 6 00:29:46.494948 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 6 00:29:46.497601 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 00:29:46.497840 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 00:29:46.500721 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 00:29:46.500942 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 00:29:46.503607 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:29:46.509259 systemd[1]: Finished ensure-sysext.service.
Nov 6 00:29:46.520050 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 00:29:46.521920 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 6 00:29:46.521990 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 6 00:29:46.523625 augenrules[1503]: No rules
Nov 6 00:29:46.526235 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 6 00:29:46.527295 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 6 00:29:46.527575 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 6 00:29:46.547311 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 6 00:29:46.550299 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 6 00:29:46.586364 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 6 00:29:46.658775 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 6 00:29:46.664898 systemd[1]: Reached target time-set.target - System Time Set.
Nov 6 00:29:46.677147 kernel: mousedev: PS/2 mouse device common for all mice
Nov 6 00:29:46.679753 systemd-networkd[1508]: lo: Link UP
Nov 6 00:29:46.679768 systemd-networkd[1508]: lo: Gained carrier
Nov 6 00:29:46.681326 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 6 00:29:46.684532 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 00:29:46.684927 systemd-networkd[1508]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 6 00:29:46.684940 systemd-networkd[1508]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 6 00:29:46.688268 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 6 00:29:46.687604 systemd-networkd[1508]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 6 00:29:46.687680 systemd-networkd[1508]: eth0: Link UP
Nov 6 00:29:46.688625 systemd[1]: Reached target network.target - Network.
Nov 6 00:29:46.688681 systemd-networkd[1508]: eth0: Gained carrier
Nov 6 00:29:46.688699 systemd-networkd[1508]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 6 00:29:46.693962 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 6 00:29:46.697902 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 6 00:29:46.704244 systemd-networkd[1508]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 6 00:29:46.706176 kernel: ACPI: button: Power Button [PWRF]
Nov 6 00:29:46.706511 systemd-timesyncd[1510]: Network configuration changed, trying to establish connection.
Nov 6 00:29:47.333916 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 6 00:29:47.338144 systemd-resolved[1305]: Clock change detected. Flushing caches.
Nov 6 00:29:47.338187 systemd-timesyncd[1510]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 6 00:29:47.338277 systemd-timesyncd[1510]: Initial clock synchronization to Thu 2025-11-06 00:29:47.333763 UTC.
Nov 6 00:29:47.348133 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 6 00:29:47.358898 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 6 00:29:47.359708 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 6 00:29:47.367595 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 6 00:29:47.493289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:29:47.588892 kernel: kvm_amd: TSC scaling supported
Nov 6 00:29:47.588988 kernel: kvm_amd: Nested Virtualization enabled
Nov 6 00:29:47.589006 kernel: kvm_amd: Nested Paging enabled
Nov 6 00:29:47.590879 kernel: kvm_amd: LBR virtualization supported
Nov 6 00:29:47.590909 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 6 00:29:47.592023 kernel: kvm_amd: Virtual GIF supported
Nov 6 00:29:47.634260 kernel: EDAC MC: Ver: 3.0.0
Nov 6 00:29:47.685533 ldconfig[1455]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 6 00:29:47.693690 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 6 00:29:47.705852 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 6 00:29:47.709950 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:29:47.739208 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 6 00:29:47.741402 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 6 00:29:47.743254 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 6 00:29:47.745264 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 6 00:29:47.747443 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 6 00:29:47.749487 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 6 00:29:47.751311 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 6 00:29:47.753337 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 6 00:29:47.755365 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 6 00:29:47.755398 systemd[1]: Reached target paths.target - Path Units.
Nov 6 00:29:47.756860 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 00:29:47.759449 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 6 00:29:47.762927 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 6 00:29:47.766697 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 6 00:29:47.768917 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 6 00:29:47.771057 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 6 00:29:47.777088 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 6 00:29:47.779126 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 6 00:29:47.781695 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 6 00:29:47.784326 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 00:29:47.785904 systemd[1]: Reached target basic.target - Basic System.
Nov 6 00:29:47.787514 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 6 00:29:47.787546 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 6 00:29:47.788839 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 6 00:29:47.792302 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 6 00:29:47.794977 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 6 00:29:47.797875 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 6 00:29:47.801538 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 6 00:29:47.803391 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 6 00:29:47.804902 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 6 00:29:47.808727 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 6 00:29:47.812514 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 6 00:29:47.815503 jq[1575]: false
Nov 6 00:29:47.815886 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 6 00:29:47.817862 extend-filesystems[1576]: Found /dev/vda6
Nov 6 00:29:47.823368 extend-filesystems[1576]: Found /dev/vda9
Nov 6 00:29:47.821654 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 6 00:29:47.826267 extend-filesystems[1576]: Checking size of /dev/vda9
Nov 6 00:29:47.827601 oslogin_cache_refresh[1577]: Refreshing passwd entry cache
Nov 6 00:29:47.828146 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Refreshing passwd entry cache
Nov 6 00:29:47.830290 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 6 00:29:47.832023 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 6 00:29:47.836409 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 6 00:29:47.837288 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Failure getting users, quitting
Nov 6 00:29:47.837288 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 6 00:29:47.837219 oslogin_cache_refresh[1577]: Failure getting users, quitting
Nov 6 00:29:47.837565 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Refreshing group entry cache
Nov 6 00:29:47.837244 oslogin_cache_refresh[1577]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 6 00:29:47.837311 oslogin_cache_refresh[1577]: Refreshing group entry cache
Nov 6 00:29:47.837957 systemd[1]: Starting update-engine.service - Update Engine...
Nov 6 00:29:47.840962 extend-filesystems[1576]: Resized partition /dev/vda9
Nov 6 00:29:47.843122 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 6 00:29:47.845762 extend-filesystems[1599]: resize2fs 1.47.3 (8-Jul-2025)
Nov 6 00:29:47.853102 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Nov 6 00:29:47.853137 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Failure getting groups, quitting
Nov 6 00:29:47.853137 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 6 00:29:47.849219 oslogin_cache_refresh[1577]: Failure getting groups, quitting
Nov 6 00:29:47.849235 oslogin_cache_refresh[1577]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 6 00:29:47.856357 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 6 00:29:47.859454 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 6 00:29:47.860461 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 6 00:29:47.860839 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 6 00:29:47.862015 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 6 00:29:47.864821 systemd[1]: motdgen.service: Deactivated successfully.
Nov 6 00:29:47.865262 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 6 00:29:47.869378 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 6 00:29:47.869748 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 6 00:29:47.872192 jq[1600]: true
Nov 6 00:29:47.877088 update_engine[1597]: I20251106 00:29:47.872403 1597 main.cc:92] Flatcar Update Engine starting
Nov 6 00:29:47.895952 jq[1614]: true
Nov 6 00:29:47.904979 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Nov 6 00:29:47.912377 tar[1610]: linux-amd64/LICENSE
Nov 6 00:29:47.921399 (ntainerd)[1616]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 6 00:29:47.929208 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 6 00:29:47.934868 tar[1610]: linux-amd64/helm
Nov 6 00:29:47.934896 extend-filesystems[1599]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 6 00:29:47.934896 extend-filesystems[1599]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 6 00:29:47.934896 extend-filesystems[1599]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Nov 6 00:29:47.929519 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 6 00:29:47.943193 extend-filesystems[1576]: Resized filesystem in /dev/vda9
Nov 6 00:29:47.955416 dbus-daemon[1573]: [system] SELinux support is enabled
Nov 6 00:29:47.955824 bash[1644]: Updated "/home/core/.ssh/authorized_keys"
Nov 6 00:29:47.957152 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 6 00:29:47.961896 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 6 00:29:47.964848 systemd-logind[1591]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 6 00:29:47.965255 systemd-logind[1591]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 6 00:29:47.965733 systemd-logind[1591]: New seat seat0.
Nov 6 00:29:47.965917 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 6 00:29:47.968431 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 6 00:29:47.968471 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 6 00:29:47.970472 update_engine[1597]: I20251106 00:29:47.969779 1597 update_check_scheduler.cc:74] Next update check in 9m47s
Nov 6 00:29:47.970828 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 6 00:29:47.970852 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 6 00:29:47.974122 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 6 00:29:47.978870 dbus-daemon[1573]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 6 00:29:47.979075 systemd[1]: Started update-engine.service - Update Engine.
Nov 6 00:29:47.986492 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 6 00:29:48.060711 locksmithd[1646]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 6 00:29:48.138628 sshd_keygen[1607]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 6 00:29:48.168917 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 6 00:29:48.174836 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 6 00:29:48.193513 systemd[1]: issuegen.service: Deactivated successfully.
Nov 6 00:29:48.193822 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 6 00:29:48.197605 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 6 00:29:48.204230 containerd[1616]: time="2025-11-06T00:29:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 6 00:29:48.205325 containerd[1616]: time="2025-11-06T00:29:48.205297177Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 6 00:29:48.214735 containerd[1616]: time="2025-11-06T00:29:48.214679643Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.851µs"
Nov 6 00:29:48.214735 containerd[1616]: time="2025-11-06T00:29:48.214727502Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 6 00:29:48.214821 containerd[1616]: time="2025-11-06T00:29:48.214748412Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 6 00:29:48.214995 containerd[1616]: time="2025-11-06T00:29:48.214967042Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 6 00:29:48.214995 containerd[1616]: time="2025-11-06T00:29:48.214990957Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 6 00:29:48.215068 containerd[1616]: time="2025-11-06T00:29:48.215032414Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 6 00:29:48.215129 containerd[1616]: time="2025-11-06T00:29:48.215104880Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 6 00:29:48.215129 containerd[1616]: time="2025-11-06T00:29:48.215123545Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 6 00:29:48.215429 containerd[1616]: time="2025-11-06T00:29:48.215399853Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 6 00:29:48.215429 containerd[1616]: time="2025-11-06T00:29:48.215421764Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 6 00:29:48.215477 containerd[1616]: time="2025-11-06T00:29:48.215441311Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 6 00:29:48.215477 containerd[1616]: time="2025-11-06T00:29:48.215452101Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 6 00:29:48.215578 containerd[1616]: time="2025-11-06T00:29:48.215550736Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 6 00:29:48.215837 containerd[1616]: time="2025-11-06T00:29:48.215809622Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 6 00:29:48.215869 containerd[1616]: time="2025-11-06T00:29:48.215847944Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 6 00:29:48.215869 containerd[1616]: time="2025-11-06T00:29:48.215858934Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 6 00:29:48.215907 containerd[1616]: time="2025-11-06T00:29:48.215885955Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 6 00:29:48.216146 containerd[1616]: time="2025-11-06T00:29:48.216119773Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 6 00:29:48.216218 containerd[1616]: time="2025-11-06T00:29:48.216195275Z" level=info msg="metadata content store policy set" policy=shared
Nov 6 00:29:48.222490 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 6 00:29:48.224474 containerd[1616]: time="2025-11-06T00:29:48.224423276Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 6 00:29:48.224517 containerd[1616]: time="2025-11-06T00:29:48.224487827Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 6 00:29:48.224517 containerd[1616]: time="2025-11-06T00:29:48.224504297Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 6 00:29:48.224576 containerd[1616]: time="2025-11-06T00:29:48.224518775Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 6 00:29:48.224576 containerd[1616]: time="2025-11-06T00:29:48.224532871Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 6 00:29:48.224576 containerd[1616]: time="2025-11-06T00:29:48.224544723Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 6 00:29:48.224576 containerd[1616]: time="2025-11-06T00:29:48.224561074Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 6 00:29:48.224576 containerd[1616]: time="2025-11-06T00:29:48.224573968Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 6 00:29:48.224662 containerd[1616]: time="2025-11-06T00:29:48.224586271Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 6 00:29:48.224662 containerd[1616]: time="2025-11-06T00:29:48.224599135Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 6 00:29:48.224662 containerd[1616]: time="2025-11-06T00:29:48.224610166Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 6 00:29:48.224662 containerd[1616]: time="2025-11-06T00:29:48.224624383Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 6 00:29:48.226316 containerd[1616]: time="2025-11-06T00:29:48.226031431Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 6 00:29:48.226316 containerd[1616]: time="2025-11-06T00:29:48.226088348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 6 00:29:48.226316 containerd[1616]: time="2025-11-06T00:29:48.226110259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 6 00:29:48.226316 containerd[1616]: time="2025-11-06T00:29:48.226122903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 6 00:29:48.226316 containerd[1616]: time="2025-11-06T00:29:48.226134575Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 6 00:29:48.226316 containerd[1616]: time="2025-11-06T00:29:48.226145525Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 6 00:29:48.226316 containerd[1616]: time="2025-11-06T00:29:48.226158209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 6 00:29:48.226316 containerd[1616]: time="2025-11-06T00:29:48.226168659Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 6 00:29:48.226316 containerd[1616]: time="2025-11-06T00:29:48.226180381Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 6 00:29:48.226316 containerd[1616]: time="2025-11-06T00:29:48.226191872Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 6 00:29:48.226316 containerd[1616]: time="2025-11-06T00:29:48.226203404Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 6 00:29:48.226316 containerd[1616]: time="2025-11-06T00:29:48.226275279Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 6 00:29:48.226316 containerd[1616]: time="2025-11-06T00:29:48.226290026Z" level=info msg="Start snapshots syncer"
Nov 6 00:29:48.226678 containerd[1616]: time="2025-11-06T00:29:48.226657596Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 6 00:29:48.227074 containerd[1616]: time="2025-11-06T00:29:48.226964411Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 6 00:29:48.227074 containerd[1616]: time="2025-11-06T00:29:48.227028141Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 6 00:29:48.227284 containerd[1616]: time="2025-11-06T00:29:48.227255447Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 6 00:29:48.227643 containerd[1616]: time="2025-11-06T00:29:48.227487712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 6 00:29:48.227643 containerd[1616]: time="2025-11-06T00:29:48.227510215Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 6 00:29:48.227643 containerd[1616]: time="2025-11-06T00:29:48.227520494Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 6 00:29:48.227643 containerd[1616]: time="2025-11-06T00:29:48.227529260Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 6 00:29:48.227643 containerd[1616]: time="2025-11-06T00:29:48.227541163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 6 00:29:48.227643 containerd[1616]: time="2025-11-06T00:29:48.227551392Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 6 00:29:48.227643 containerd[1616]: time="2025-11-06T00:29:48.227561371Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 6 00:29:48.227643 containerd[1616]: time="2025-11-06T00:29:48.227584464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 6 00:29:48.227643 containerd[1616]: time="2025-11-06T00:29:48.227595475Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 6 00:29:48.227643 containerd[1616]: time="2025-11-06T00:29:48.227604882Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 6 00:29:48.227633 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 6 00:29:48.227995 containerd[1616]: time="2025-11-06T00:29:48.227901759Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 6 00:29:48.228192 containerd[1616]: time="2025-11-06T00:29:48.227924512Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 6 00:29:48.228192 containerd[1616]: time="2025-11-06T00:29:48.228057241Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 6 00:29:48.228192 containerd[1616]: time="2025-11-06T00:29:48.228071137Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 6 00:29:48.228192 containerd[1616]: time="2025-11-06T00:29:48.228079522Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 6 00:29:48.228192 containerd[1616]: time="2025-11-06T00:29:48.228090012Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 6 00:29:48.228192 containerd[1616]: time="2025-11-06T00:29:48.228099951Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 6 00:29:48.228192 containerd[1616]: time="2025-11-06T00:29:48.228117564Z" level=info msg="runtime interface created"
Nov 6 00:29:48.228192 containerd[1616]: time="2025-11-06T00:29:48.228122703Z" level=info msg="created NRI interface"
Nov 6 00:29:48.228192 containerd[1616]: time="2025-11-06T00:29:48.228130548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 6 00:29:48.228192 containerd[1616]: time="2025-11-06T00:29:48.228140817Z" level=info msg="Connect containerd service"
Nov 6 00:29:48.228192 containerd[1616]: time="2025-11-06T00:29:48.228162979Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 6 00:29:48.231126 containerd[1616]: time="2025-11-06T00:29:48.231055583Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 6 00:29:48.231388 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 6 00:29:48.233361 systemd[1]: Reached target getty.target - Login Prompts.
Nov 6 00:29:48.252039 tar[1610]: linux-amd64/README.md
Nov 6 00:29:48.279288 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 6 00:29:48.403722 systemd-networkd[1508]: eth0: Gained IPv6LL
Nov 6 00:29:48.407187 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 6 00:29:48.458963 systemd[1]: Reached target network-online.target - Network is Online.
Nov 6 00:29:48.462606 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 6 00:29:48.466739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:29:48.483986 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 6 00:29:48.518472 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 6 00:29:48.521106 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 6 00:29:48.521390 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Nov 6 00:29:48.524921 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 6 00:29:48.574763 containerd[1616]: time="2025-11-06T00:29:48.574084625Z" level=info msg="Start subscribing containerd event" Nov 6 00:29:48.574763 containerd[1616]: time="2025-11-06T00:29:48.574189282Z" level=info msg="Start recovering state" Nov 6 00:29:48.574763 containerd[1616]: time="2025-11-06T00:29:48.574207225Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 00:29:48.574763 containerd[1616]: time="2025-11-06T00:29:48.574276786Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 00:29:48.574763 containerd[1616]: time="2025-11-06T00:29:48.574317753Z" level=info msg="Start event monitor" Nov 6 00:29:48.574763 containerd[1616]: time="2025-11-06T00:29:48.574334805Z" level=info msg="Start cni network conf syncer for default" Nov 6 00:29:48.574763 containerd[1616]: time="2025-11-06T00:29:48.574345775Z" level=info msg="Start streaming server" Nov 6 00:29:48.574763 containerd[1616]: time="2025-11-06T00:29:48.574357467Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 00:29:48.574763 containerd[1616]: time="2025-11-06T00:29:48.574366153Z" level=info msg="runtime interface starting up..." Nov 6 00:29:48.574763 containerd[1616]: time="2025-11-06T00:29:48.574373307Z" level=info msg="starting plugins..." Nov 6 00:29:48.574763 containerd[1616]: time="2025-11-06T00:29:48.574391351Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 00:29:48.574763 containerd[1616]: time="2025-11-06T00:29:48.574594732Z" level=info msg="containerd successfully booted in 0.371177s" Nov 6 00:29:48.574827 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 00:29:49.817453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:29:49.819816 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 00:29:49.822047 systemd[1]: Startup finished in 2.631s (kernel) + 9.377s (initrd) + 4.937s (userspace) = 16.946s. 
Nov 6 00:29:49.840452 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:29:51.133215 kubelet[1715]: E1106 00:29:51.133142 1715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:29:51.137129 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:29:51.137349 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:29:51.137772 systemd[1]: kubelet.service: Consumed 2.409s CPU time, 265.9M memory peak. Nov 6 00:29:57.778226 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 00:29:57.779571 systemd[1]: Started sshd@0-10.0.0.111:22-10.0.0.1:49504.service - OpenSSH per-connection server daemon (10.0.0.1:49504). Nov 6 00:29:57.861702 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 49504 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:29:57.864239 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:57.871576 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 00:29:57.872966 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 00:29:57.880228 systemd-logind[1591]: New session 1 of user core. Nov 6 00:29:57.897368 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 00:29:57.902119 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 6 00:29:57.933494 (systemd)[1733]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 00:29:57.936336 systemd-logind[1591]: New session c1 of user core. Nov 6 00:29:58.100887 systemd[1733]: Queued start job for default target default.target. Nov 6 00:29:58.123705 systemd[1733]: Created slice app.slice - User Application Slice. Nov 6 00:29:58.123754 systemd[1733]: Reached target paths.target - Paths. Nov 6 00:29:58.123811 systemd[1733]: Reached target timers.target - Timers. Nov 6 00:29:58.125674 systemd[1733]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 00:29:58.138760 systemd[1733]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 00:29:58.138954 systemd[1733]: Reached target sockets.target - Sockets. Nov 6 00:29:58.139023 systemd[1733]: Reached target basic.target - Basic System. Nov 6 00:29:58.139097 systemd[1733]: Reached target default.target - Main User Target. Nov 6 00:29:58.139154 systemd[1733]: Startup finished in 194ms. Nov 6 00:29:58.139978 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 00:29:58.143992 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 00:29:58.212592 systemd[1]: Started sshd@1-10.0.0.111:22-10.0.0.1:49506.service - OpenSSH per-connection server daemon (10.0.0.1:49506). Nov 6 00:29:58.277294 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 49506 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:29:58.279775 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:58.288104 systemd-logind[1591]: New session 2 of user core. Nov 6 00:29:58.302240 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 6 00:29:58.366844 sshd[1747]: Connection closed by 10.0.0.1 port 49506 Nov 6 00:29:58.368124 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Nov 6 00:29:58.386870 systemd[1]: sshd@1-10.0.0.111:22-10.0.0.1:49506.service: Deactivated successfully. Nov 6 00:29:58.389291 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 00:29:58.390404 systemd-logind[1591]: Session 2 logged out. Waiting for processes to exit. Nov 6 00:29:58.394094 systemd[1]: Started sshd@2-10.0.0.111:22-10.0.0.1:49516.service - OpenSSH per-connection server daemon (10.0.0.1:49516). Nov 6 00:29:58.394698 systemd-logind[1591]: Removed session 2. Nov 6 00:29:58.453714 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 49516 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:29:58.455711 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:58.461312 systemd-logind[1591]: New session 3 of user core. Nov 6 00:29:58.479243 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 00:29:58.532450 sshd[1756]: Connection closed by 10.0.0.1 port 49516 Nov 6 00:29:58.532838 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Nov 6 00:29:58.552442 systemd[1]: sshd@2-10.0.0.111:22-10.0.0.1:49516.service: Deactivated successfully. Nov 6 00:29:58.554568 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 00:29:58.555476 systemd-logind[1591]: Session 3 logged out. Waiting for processes to exit. Nov 6 00:29:58.560371 systemd[1]: Started sshd@3-10.0.0.111:22-10.0.0.1:49528.service - OpenSSH per-connection server daemon (10.0.0.1:49528). Nov 6 00:29:58.561379 systemd-logind[1591]: Removed session 3. 
Nov 6 00:29:58.637973 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 49528 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:29:58.640304 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:58.646855 systemd-logind[1591]: New session 4 of user core. Nov 6 00:29:58.654102 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 00:29:58.711988 sshd[1765]: Connection closed by 10.0.0.1 port 49528 Nov 6 00:29:58.712341 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Nov 6 00:29:58.726750 systemd[1]: sshd@3-10.0.0.111:22-10.0.0.1:49528.service: Deactivated successfully. Nov 6 00:29:58.728856 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 00:29:58.729663 systemd-logind[1591]: Session 4 logged out. Waiting for processes to exit. Nov 6 00:29:58.733341 systemd[1]: Started sshd@4-10.0.0.111:22-10.0.0.1:49542.service - OpenSSH per-connection server daemon (10.0.0.1:49542). Nov 6 00:29:58.734053 systemd-logind[1591]: Removed session 4. Nov 6 00:29:58.805907 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 49542 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:29:58.807826 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:58.813989 systemd-logind[1591]: New session 5 of user core. Nov 6 00:29:58.822200 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 6 00:29:58.888887 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 00:29:58.889282 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:29:58.905627 sudo[1775]: pam_unix(sudo:session): session closed for user root Nov 6 00:29:58.908354 sshd[1774]: Connection closed by 10.0.0.1 port 49542 Nov 6 00:29:58.908817 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Nov 6 00:29:58.926861 systemd[1]: sshd@4-10.0.0.111:22-10.0.0.1:49542.service: Deactivated successfully. Nov 6 00:29:58.929481 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 00:29:58.930436 systemd-logind[1591]: Session 5 logged out. Waiting for processes to exit. Nov 6 00:29:58.934050 systemd[1]: Started sshd@5-10.0.0.111:22-10.0.0.1:49558.service - OpenSSH per-connection server daemon (10.0.0.1:49558). Nov 6 00:29:58.934680 systemd-logind[1591]: Removed session 5. Nov 6 00:29:59.001428 sshd[1781]: Accepted publickey for core from 10.0.0.1 port 49558 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:29:59.003560 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:59.009422 systemd-logind[1591]: New session 6 of user core. Nov 6 00:29:59.019159 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 6 00:29:59.078276 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 00:29:59.078618 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:29:59.088371 sudo[1786]: pam_unix(sudo:session): session closed for user root Nov 6 00:29:59.099251 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 00:29:59.099685 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:29:59.113916 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:29:59.168826 augenrules[1808]: No rules Nov 6 00:29:59.173111 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:29:59.173475 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:29:59.175206 sudo[1785]: pam_unix(sudo:session): session closed for user root Nov 6 00:29:59.177641 sshd[1784]: Connection closed by 10.0.0.1 port 49558 Nov 6 00:29:59.177985 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Nov 6 00:29:59.193122 systemd[1]: sshd@5-10.0.0.111:22-10.0.0.1:49558.service: Deactivated successfully. Nov 6 00:29:59.195507 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 00:29:59.196450 systemd-logind[1591]: Session 6 logged out. Waiting for processes to exit. Nov 6 00:29:59.200387 systemd[1]: Started sshd@6-10.0.0.111:22-10.0.0.1:49564.service - OpenSSH per-connection server daemon (10.0.0.1:49564). Nov 6 00:29:59.201197 systemd-logind[1591]: Removed session 6. Nov 6 00:29:59.268101 sshd[1817]: Accepted publickey for core from 10.0.0.1 port 49564 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:29:59.270208 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:59.275451 systemd-logind[1591]: New session 7 of user core. 
Nov 6 00:29:59.291259 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 00:29:59.352095 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 00:29:59.352600 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:30:00.341630 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 00:30:00.363613 (dockerd)[1842]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 00:30:01.172825 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 00:30:01.175077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:30:01.239313 dockerd[1842]: time="2025-11-06T00:30:01.239210537Z" level=info msg="Starting up" Nov 6 00:30:01.240467 dockerd[1842]: time="2025-11-06T00:30:01.240419454Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 00:30:01.272136 dockerd[1842]: time="2025-11-06T00:30:01.272075140Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 00:30:01.451268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 00:30:01.456131 (kubelet)[1876]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:30:01.655884 kubelet[1876]: E1106 00:30:01.655802 1876 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:30:01.663750 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:30:01.664031 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:30:01.664431 systemd[1]: kubelet.service: Consumed 322ms CPU time, 111.3M memory peak. Nov 6 00:30:01.712584 dockerd[1842]: time="2025-11-06T00:30:01.712375631Z" level=info msg="Loading containers: start." Nov 6 00:30:01.728009 kernel: Initializing XFRM netlink socket Nov 6 00:30:02.088703 systemd-networkd[1508]: docker0: Link UP Nov 6 00:30:02.097051 dockerd[1842]: time="2025-11-06T00:30:02.096966015Z" level=info msg="Loading containers: done." Nov 6 00:30:02.115188 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1950154323-merged.mount: Deactivated successfully. 
Nov 6 00:30:02.116925 dockerd[1842]: time="2025-11-06T00:30:02.116855435Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 00:30:02.117095 dockerd[1842]: time="2025-11-06T00:30:02.117016827Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 00:30:02.117179 dockerd[1842]: time="2025-11-06T00:30:02.117150809Z" level=info msg="Initializing buildkit" Nov 6 00:30:02.155690 dockerd[1842]: time="2025-11-06T00:30:02.155603241Z" level=info msg="Completed buildkit initialization" Nov 6 00:30:02.162557 dockerd[1842]: time="2025-11-06T00:30:02.162085577Z" level=info msg="Daemon has completed initialization" Nov 6 00:30:02.162557 dockerd[1842]: time="2025-11-06T00:30:02.162192999Z" level=info msg="API listen on /run/docker.sock" Nov 6 00:30:02.163067 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 00:30:03.097959 containerd[1616]: time="2025-11-06T00:30:03.097899022Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 6 00:30:03.982707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3304963375.mount: Deactivated successfully. 
Nov 6 00:30:05.424870 containerd[1616]: time="2025-11-06T00:30:05.424813577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:05.425925 containerd[1616]: time="2025-11-06T00:30:05.425880778Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 6 00:30:05.427678 containerd[1616]: time="2025-11-06T00:30:05.427629778Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:05.430498 containerd[1616]: time="2025-11-06T00:30:05.430440950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:05.431588 containerd[1616]: time="2025-11-06T00:30:05.431543648Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.333581287s" Nov 6 00:30:05.431651 containerd[1616]: time="2025-11-06T00:30:05.431595095Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 6 00:30:05.432281 containerd[1616]: time="2025-11-06T00:30:05.432228543Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 6 00:30:07.858549 containerd[1616]: time="2025-11-06T00:30:07.858433794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:07.859339 containerd[1616]: time="2025-11-06T00:30:07.859254092Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 6 00:30:07.861251 containerd[1616]: time="2025-11-06T00:30:07.861196034Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:07.864149 containerd[1616]: time="2025-11-06T00:30:07.864094620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:07.865475 containerd[1616]: time="2025-11-06T00:30:07.865417400Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 2.433145416s" Nov 6 00:30:07.865475 containerd[1616]: time="2025-11-06T00:30:07.865464609Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 6 00:30:07.866329 containerd[1616]: time="2025-11-06T00:30:07.866247988Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 6 00:30:10.570679 containerd[1616]: time="2025-11-06T00:30:10.570571670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:10.580710 containerd[1616]: time="2025-11-06T00:30:10.580625254Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 6 00:30:10.584429 containerd[1616]: time="2025-11-06T00:30:10.584321105Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:10.593501 containerd[1616]: time="2025-11-06T00:30:10.593389602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:10.595128 containerd[1616]: time="2025-11-06T00:30:10.595068651Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 2.728761101s" Nov 6 00:30:10.595128 containerd[1616]: time="2025-11-06T00:30:10.595108325Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 6 00:30:10.596017 containerd[1616]: time="2025-11-06T00:30:10.595959051Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 6 00:30:11.674386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 00:30:11.677674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:30:12.194756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 00:30:12.224622 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:30:12.561858 kubelet[2158]: E1106 00:30:12.561698 2158 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:30:12.570185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:30:12.570453 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:30:12.570879 systemd[1]: kubelet.service: Consumed 696ms CPU time, 110.6M memory peak. Nov 6 00:30:13.360921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2432811241.mount: Deactivated successfully. Nov 6 00:30:17.870711 containerd[1616]: time="2025-11-06T00:30:17.870627180Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 6 00:30:17.871388 containerd[1616]: time="2025-11-06T00:30:17.871348904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:17.872255 containerd[1616]: time="2025-11-06T00:30:17.872190061Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:17.872928 containerd[1616]: time="2025-11-06T00:30:17.872896796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:17.873557 containerd[1616]: time="2025-11-06T00:30:17.873513363Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 7.277516922s" Nov 6 00:30:17.873557 containerd[1616]: time="2025-11-06T00:30:17.873548118Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 6 00:30:17.874071 containerd[1616]: time="2025-11-06T00:30:17.874037676Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 6 00:30:18.730707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount362928582.mount: Deactivated successfully. Nov 6 00:30:20.533150 containerd[1616]: time="2025-11-06T00:30:20.533067106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:20.595788 containerd[1616]: time="2025-11-06T00:30:20.595706814Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 6 00:30:20.637184 containerd[1616]: time="2025-11-06T00:30:20.637077768Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:20.689403 containerd[1616]: time="2025-11-06T00:30:20.689333340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:20.690569 containerd[1616]: time="2025-11-06T00:30:20.690527531Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.816450501s" Nov 6 00:30:20.690569 containerd[1616]: time="2025-11-06T00:30:20.690563349Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 6 00:30:20.691324 containerd[1616]: time="2025-11-06T00:30:20.691295806Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 6 00:30:21.569458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3033979859.mount: Deactivated successfully. Nov 6 00:30:21.580518 containerd[1616]: time="2025-11-06T00:30:21.580439665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:30:21.581309 containerd[1616]: time="2025-11-06T00:30:21.581244267Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 6 00:30:21.582655 containerd[1616]: time="2025-11-06T00:30:21.582603672Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:30:21.584684 containerd[1616]: time="2025-11-06T00:30:21.584618803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:30:21.585516 containerd[1616]: time="2025-11-06T00:30:21.585464955Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 894.142409ms" Nov 6 00:30:21.585516 containerd[1616]: time="2025-11-06T00:30:21.585510953Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 6 00:30:21.586108 containerd[1616]: time="2025-11-06T00:30:21.586029436Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 6 00:30:22.672983 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 6 00:30:22.675172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:30:22.989020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:30:23.002692 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:30:23.128922 kubelet[2234]: E1106 00:30:23.128726 2234 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:30:23.133707 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:30:23.134009 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:30:23.135220 systemd[1]: kubelet.service: Consumed 305ms CPU time, 110.7M memory peak. Nov 6 00:30:23.642040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3676309688.mount: Deactivated successfully. 
Nov 6 00:30:26.965954 containerd[1616]: time="2025-11-06T00:30:26.965854203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:26.966692 containerd[1616]: time="2025-11-06T00:30:26.966612577Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 6 00:30:26.967871 containerd[1616]: time="2025-11-06T00:30:26.967809247Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:26.970975 containerd[1616]: time="2025-11-06T00:30:26.970873695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:26.971922 containerd[1616]: time="2025-11-06T00:30:26.971860043Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.385804657s" Nov 6 00:30:26.971922 containerd[1616]: time="2025-11-06T00:30:26.971910199Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 6 00:30:29.490843 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:30:29.491148 systemd[1]: kubelet.service: Consumed 305ms CPU time, 110.7M memory peak. Nov 6 00:30:29.499346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:30:29.574974 systemd[1]: Reload requested from client PID 2329 ('systemctl') (unit session-7.scope)... 
Nov 6 00:30:29.575014 systemd[1]: Reloading... Nov 6 00:30:29.802991 zram_generator::config[2376]: No configuration found. Nov 6 00:30:30.549609 systemd[1]: Reloading finished in 973 ms. Nov 6 00:30:30.698813 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 6 00:30:30.699066 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 6 00:30:30.700594 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:30:30.700676 systemd[1]: kubelet.service: Consumed 220ms CPU time, 98.2M memory peak. Nov 6 00:30:30.708015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:30:31.172592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:30:31.200805 (kubelet)[2420]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:30:31.371288 kubelet[2420]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:30:31.371288 kubelet[2420]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:30:31.371288 kubelet[2420]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 6 00:30:31.371797 kubelet[2420]: I1106 00:30:31.371320 2420 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:30:31.718372 kubelet[2420]: I1106 00:30:31.718257 2420 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 6 00:30:31.718372 kubelet[2420]: I1106 00:30:31.718328 2420 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:30:31.718752 kubelet[2420]: I1106 00:30:31.718702 2420 server.go:954] "Client rotation is on, will bootstrap in background" Nov 6 00:30:31.803233 kubelet[2420]: I1106 00:30:31.800091 2420 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:30:31.804849 kubelet[2420]: E1106 00:30:31.804789 2420 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:30:31.834731 kubelet[2420]: I1106 00:30:31.834683 2420 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:30:31.848192 kubelet[2420]: I1106 00:30:31.848079 2420 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 00:30:31.858392 kubelet[2420]: I1106 00:30:31.858246 2420 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:30:31.858688 kubelet[2420]: I1106 00:30:31.858372 2420 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:30:31.858884 kubelet[2420]: I1106 00:30:31.858703 2420 topology_manager.go:138] "Creating topology manager with none policy" Nov 
6 00:30:31.858884 kubelet[2420]: I1106 00:30:31.858717 2420 container_manager_linux.go:304] "Creating device plugin manager" Nov 6 00:30:31.859443 kubelet[2420]: I1106 00:30:31.859003 2420 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:30:31.868593 kubelet[2420]: I1106 00:30:31.866154 2420 kubelet.go:446] "Attempting to sync node with API server" Nov 6 00:30:31.868593 kubelet[2420]: I1106 00:30:31.868406 2420 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:30:31.869801 kubelet[2420]: I1106 00:30:31.868976 2420 kubelet.go:352] "Adding apiserver pod source" Nov 6 00:30:31.869801 kubelet[2420]: I1106 00:30:31.869021 2420 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:30:31.869990 kubelet[2420]: W1106 00:30:31.869892 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Nov 6 00:30:31.870056 kubelet[2420]: E1106 00:30:31.870004 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:30:31.871668 kubelet[2420]: W1106 00:30:31.870387 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Nov 6 00:30:31.871668 kubelet[2420]: E1106 00:30:31.871192 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:30:31.879444 kubelet[2420]: I1106 00:30:31.879372 2420 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:30:31.880041 kubelet[2420]: I1106 00:30:31.880006 2420 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 6 00:30:31.880882 kubelet[2420]: W1106 00:30:31.880831 2420 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 6 00:30:31.883529 kubelet[2420]: I1106 00:30:31.883475 2420 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 00:30:31.883529 kubelet[2420]: I1106 00:30:31.883537 2420 server.go:1287] "Started kubelet" Nov 6 00:30:31.884844 kubelet[2420]: I1106 00:30:31.884765 2420 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:30:31.886617 kubelet[2420]: I1106 00:30:31.886564 2420 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:30:31.889794 kubelet[2420]: I1106 00:30:31.888215 2420 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:30:31.889794 kubelet[2420]: I1106 00:30:31.888673 2420 server.go:479] "Adding debug handlers to kubelet server" Nov 6 00:30:31.892128 kubelet[2420]: I1106 00:30:31.889879 2420 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:30:31.903203 kubelet[2420]: E1106 00:30:31.902433 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:30:31.903203 kubelet[2420]: I1106 00:30:31.902543 2420 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 00:30:31.903203 kubelet[2420]: I1106 
00:30:31.902726 2420 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 00:30:31.903203 kubelet[2420]: I1106 00:30:31.902831 2420 reconciler.go:26] "Reconciler: start to sync state" Nov 6 00:30:31.903558 kubelet[2420]: I1106 00:30:31.903527 2420 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:30:31.911706 kubelet[2420]: W1106 00:30:31.909868 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Nov 6 00:30:31.911706 kubelet[2420]: E1106 00:30:31.909958 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:30:31.922290 kubelet[2420]: I1106 00:30:31.919749 2420 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:30:31.922290 kubelet[2420]: E1106 00:30:31.920499 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="200ms" Nov 6 00:30:31.922801 kubelet[2420]: I1106 00:30:31.922776 2420 factory.go:221] Registration of the containerd container factory successfully Nov 6 00:30:31.922889 kubelet[2420]: I1106 00:30:31.922875 2420 factory.go:221] Registration of the systemd container factory successfully Nov 6 00:30:31.924097 
kubelet[2420]: E1106 00:30:31.924064 2420 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:30:31.933626 kubelet[2420]: E1106 00:30:31.932362 2420 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.111:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.111:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875437ca975fe8d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-06 00:30:31.883505293 +0000 UTC m=+0.658856257,LastTimestamp:2025-11-06 00:30:31.883505293 +0000 UTC m=+0.658856257,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 6 00:30:31.961877 kubelet[2420]: I1106 00:30:31.961817 2420 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:30:31.961877 kubelet[2420]: I1106 00:30:31.961853 2420 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:30:31.961877 kubelet[2420]: I1106 00:30:31.961887 2420 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:30:32.003389 kubelet[2420]: E1106 00:30:32.003165 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:30:32.104586 kubelet[2420]: E1106 00:30:32.103467 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:30:32.122343 kubelet[2420]: E1106 00:30:32.122179 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="400ms" Nov 6 00:30:32.204635 kubelet[2420]: E1106 00:30:32.204569 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:30:32.307527 kubelet[2420]: E1106 00:30:32.304714 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:30:32.405960 kubelet[2420]: E1106 00:30:32.405837 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:30:32.507125 kubelet[2420]: E1106 00:30:32.507022 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:30:32.523770 kubelet[2420]: E1106 00:30:32.523080 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="800ms" Nov 6 00:30:32.608057 kubelet[2420]: E1106 00:30:32.607840 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:30:32.708487 kubelet[2420]: E1106 00:30:32.708407 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:30:32.812207 kubelet[2420]: E1106 00:30:32.808777 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:30:32.814042 kubelet[2420]: W1106 00:30:32.813171 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
10.0.0.111:6443: connect: connection refused Nov 6 00:30:32.814042 kubelet[2420]: E1106 00:30:32.813261 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:30:32.888309 update_engine[1597]: I20251106 00:30:32.887898 1597 update_attempter.cc:509] Updating boot flags... Nov 6 00:30:32.911678 kubelet[2420]: E1106 00:30:32.909856 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:30:32.965972 kubelet[2420]: W1106 00:30:32.961850 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Nov 6 00:30:32.965972 kubelet[2420]: E1106 00:30:32.961955 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:30:32.984491 kubelet[2420]: I1106 00:30:32.984414 2420 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 6 00:30:32.987965 kubelet[2420]: I1106 00:30:32.987575 2420 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 6 00:30:32.987965 kubelet[2420]: I1106 00:30:32.987616 2420 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 6 00:30:32.987965 kubelet[2420]: I1106 00:30:32.987651 2420 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 6 00:30:32.987965 kubelet[2420]: I1106 00:30:32.987664 2420 kubelet.go:2382] "Starting kubelet main sync loop" Nov 6 00:30:32.987965 kubelet[2420]: E1106 00:30:32.987728 2420 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:30:32.990956 kubelet[2420]: W1106 00:30:32.990268 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Nov 6 00:30:32.990956 kubelet[2420]: E1106 00:30:32.990336 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:30:33.015256 kubelet[2420]: E1106 00:30:33.012276 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:30:33.031761 kubelet[2420]: I1106 00:30:33.030130 2420 policy_none.go:49] "None policy: Start" Nov 6 00:30:33.031761 kubelet[2420]: I1106 00:30:33.030186 2420 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 00:30:33.031761 kubelet[2420]: I1106 00:30:33.030203 2420 state_mem.go:35] "Initializing new in-memory state store" Nov 6 00:30:33.082964 kubelet[2420]: W1106 00:30:33.080054 2420 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Nov 6 00:30:33.082964 kubelet[2420]: E1106 00:30:33.080152 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:30:33.088716 kubelet[2420]: E1106 00:30:33.088464 2420 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 00:30:33.094683 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 00:30:33.113027 kubelet[2420]: E1106 00:30:33.112512 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:30:33.219046 kubelet[2420]: E1106 00:30:33.217303 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:30:33.258986 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 00:30:33.289347 kubelet[2420]: E1106 00:30:33.289295 2420 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 00:30:33.309021 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 6 00:30:33.317487 kubelet[2420]: E1106 00:30:33.317436 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:30:33.328817 kubelet[2420]: E1106 00:30:33.328745 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="1.6s" Nov 6 00:30:33.376659 kubelet[2420]: I1106 00:30:33.373610 2420 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 6 00:30:33.376659 kubelet[2420]: I1106 00:30:33.373969 2420 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:30:33.376659 kubelet[2420]: I1106 00:30:33.373989 2420 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:30:33.376659 kubelet[2420]: I1106 00:30:33.376389 2420 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:30:33.395641 kubelet[2420]: E1106 00:30:33.395449 2420 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 00:30:33.395641 kubelet[2420]: E1106 00:30:33.395554 2420 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 6 00:30:33.477276 kubelet[2420]: I1106 00:30:33.476741 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:30:33.477745 kubelet[2420]: E1106 00:30:33.477320 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Nov 6 00:30:33.682675 kubelet[2420]: I1106 00:30:33.682376 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:30:33.682822 kubelet[2420]: E1106 00:30:33.682799 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Nov 6 00:30:33.719374 kubelet[2420]: I1106 00:30:33.719313 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:30:33.719374 kubelet[2420]: I1106 00:30:33.719371 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:30:33.719567 kubelet[2420]: I1106 00:30:33.719397 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:30:33.719567 kubelet[2420]: I1106 00:30:33.719419 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:30:33.719567 kubelet[2420]: I1106 00:30:33.719444 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 6 00:30:33.719567 kubelet[2420]: I1106 00:30:33.719464 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:30:33.730564 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. 
Nov 6 00:30:33.754808 kubelet[2420]: E1106 00:30:33.754745 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:30:33.767516 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. Nov 6 00:30:33.798084 kubelet[2420]: E1106 00:30:33.797670 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:30:33.821118 kubelet[2420]: I1106 00:30:33.819773 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a83e7ee3a718d9091644b4d837ece6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"50a83e7ee3a718d9091644b4d837ece6\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:30:33.821118 kubelet[2420]: I1106 00:30:33.819865 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a83e7ee3a718d9091644b4d837ece6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"50a83e7ee3a718d9091644b4d837ece6\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:30:33.821118 kubelet[2420]: I1106 00:30:33.819918 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a83e7ee3a718d9091644b4d837ece6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"50a83e7ee3a718d9091644b4d837ece6\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:30:33.820880 systemd[1]: Created slice kubepods-burstable-pod50a83e7ee3a718d9091644b4d837ece6.slice - libcontainer container kubepods-burstable-pod50a83e7ee3a718d9091644b4d837ece6.slice. 
Nov 6 00:30:33.827637 kubelet[2420]: E1106 00:30:33.827271 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:30:33.909236 kubelet[2420]: E1106 00:30:33.909164 2420 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:30:34.057202 kubelet[2420]: E1106 00:30:34.056424 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:30:34.059763 containerd[1616]: time="2025-11-06T00:30:34.058438185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 6 00:30:34.089883 kubelet[2420]: I1106 00:30:34.089843 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:30:34.090472 kubelet[2420]: E1106 00:30:34.090443 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Nov 6 00:30:34.100699 kubelet[2420]: E1106 00:30:34.099194 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:30:34.100834 containerd[1616]: time="2025-11-06T00:30:34.099801138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" 
Nov 6 00:30:34.128350 kubelet[2420]: E1106 00:30:34.128258 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:34.131376 containerd[1616]: time="2025-11-06T00:30:34.128918310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:50a83e7ee3a718d9091644b4d837ece6,Namespace:kube-system,Attempt:0,}"
Nov 6 00:30:34.228266 containerd[1616]: time="2025-11-06T00:30:34.220338212Z" level=info msg="connecting to shim 52e4c42e32b5ae74e1dbedb016ee9fe4bfb7534b61eee935b72a897a41e13bff" address="unix:///run/containerd/s/b747879e2f5ac597eff20cdc700f1554ba155b9c65aa44e05ad285f5a4e23d8e" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:30:34.256452 kubelet[2420]: W1106 00:30:34.252926 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused
Nov 6 00:30:34.256452 kubelet[2420]: E1106 00:30:34.253022 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError"
Nov 6 00:30:34.390413 containerd[1616]: time="2025-11-06T00:30:34.390190594Z" level=info msg="connecting to shim 9b1cd216c507dded95d12666a180ca05be8d841b4724f64e975f4853548193d4" address="unix:///run/containerd/s/895e188826d49a57caa5f2d913618a48ee0d68e8e4288bed5d3f3dac7a6345e9" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:30:34.393879 containerd[1616]: time="2025-11-06T00:30:34.393825094Z" level=info msg="connecting to shim 0320bf9214fa91fe6c24452aae87a4e4282a4c6f8616046ed9a7f5c86bbcfc56" address="unix:///run/containerd/s/295db31ffa27bef9ab785b16b692d5bfdd66d421a1141980778c383364276e0f" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:30:34.490087 kubelet[2420]: W1106 00:30:34.489872 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused
Nov 6 00:30:34.490087 kubelet[2420]: E1106 00:30:34.490014 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError"
Nov 6 00:30:34.614909 systemd[1]: Started cri-containerd-0320bf9214fa91fe6c24452aae87a4e4282a4c6f8616046ed9a7f5c86bbcfc56.scope - libcontainer container 0320bf9214fa91fe6c24452aae87a4e4282a4c6f8616046ed9a7f5c86bbcfc56.
Nov 6 00:30:34.658418 systemd[1]: Started cri-containerd-52e4c42e32b5ae74e1dbedb016ee9fe4bfb7534b61eee935b72a897a41e13bff.scope - libcontainer container 52e4c42e32b5ae74e1dbedb016ee9fe4bfb7534b61eee935b72a897a41e13bff.
Nov 6 00:30:34.665821 systemd[1]: Started cri-containerd-9b1cd216c507dded95d12666a180ca05be8d841b4724f64e975f4853548193d4.scope - libcontainer container 9b1cd216c507dded95d12666a180ca05be8d841b4724f64e975f4853548193d4.
Nov 6 00:30:34.897777 kubelet[2420]: I1106 00:30:34.893592 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 6 00:30:34.897777 kubelet[2420]: E1106 00:30:34.893900 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost"
Nov 6 00:30:34.936060 kubelet[2420]: E1106 00:30:34.936005 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="3.2s"
Nov 6 00:30:34.938165 containerd[1616]: time="2025-11-06T00:30:34.937606455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"52e4c42e32b5ae74e1dbedb016ee9fe4bfb7534b61eee935b72a897a41e13bff\""
Nov 6 00:30:34.943354 kubelet[2420]: E1106 00:30:34.940683 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:34.947202 containerd[1616]: time="2025-11-06T00:30:34.946953118Z" level=info msg="CreateContainer within sandbox \"52e4c42e32b5ae74e1dbedb016ee9fe4bfb7534b61eee935b72a897a41e13bff\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 6 00:30:34.975551 containerd[1616]: time="2025-11-06T00:30:34.975478568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:50a83e7ee3a718d9091644b4d837ece6,Namespace:kube-system,Attempt:0,} returns sandbox id \"0320bf9214fa91fe6c24452aae87a4e4282a4c6f8616046ed9a7f5c86bbcfc56\""
Nov 6 00:30:34.983233 kubelet[2420]: E1106 00:30:34.980848 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:34.983405 containerd[1616]: time="2025-11-06T00:30:34.982729525Z" level=info msg="CreateContainer within sandbox \"0320bf9214fa91fe6c24452aae87a4e4282a4c6f8616046ed9a7f5c86bbcfc56\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 6 00:30:35.024180 containerd[1616]: time="2025-11-06T00:30:35.021337520Z" level=info msg="Container 3173bebe827878534cdb9fb8596d922defba983d329a62f1cbc1aab457398a3c: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:30:35.046581 containerd[1616]: time="2025-11-06T00:30:35.044637933Z" level=info msg="Container a068f4edfb43e051223d03fedb07eb616bd941b7cd536962a609833122af0a2e: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:30:35.046581 containerd[1616]: time="2025-11-06T00:30:35.046006012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b1cd216c507dded95d12666a180ca05be8d841b4724f64e975f4853548193d4\""
Nov 6 00:30:35.059245 kubelet[2420]: E1106 00:30:35.056421 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:35.059409 containerd[1616]: time="2025-11-06T00:30:35.058104201Z" level=info msg="CreateContainer within sandbox \"9b1cd216c507dded95d12666a180ca05be8d841b4724f64e975f4853548193d4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 6 00:30:35.080265 containerd[1616]: time="2025-11-06T00:30:35.080017691Z" level=info msg="CreateContainer within sandbox \"0320bf9214fa91fe6c24452aae87a4e4282a4c6f8616046ed9a7f5c86bbcfc56\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a068f4edfb43e051223d03fedb07eb616bd941b7cd536962a609833122af0a2e\""
Nov 6 00:30:35.086205 containerd[1616]: time="2025-11-06T00:30:35.084782345Z" level=info msg="StartContainer for \"a068f4edfb43e051223d03fedb07eb616bd941b7cd536962a609833122af0a2e\""
Nov 6 00:30:35.096585 containerd[1616]: time="2025-11-06T00:30:35.096511576Z" level=info msg="connecting to shim a068f4edfb43e051223d03fedb07eb616bd941b7cd536962a609833122af0a2e" address="unix:///run/containerd/s/295db31ffa27bef9ab785b16b692d5bfdd66d421a1141980778c383364276e0f" protocol=ttrpc version=3
Nov 6 00:30:35.105190 containerd[1616]: time="2025-11-06T00:30:35.101628557Z" level=info msg="CreateContainer within sandbox \"52e4c42e32b5ae74e1dbedb016ee9fe4bfb7534b61eee935b72a897a41e13bff\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3173bebe827878534cdb9fb8596d922defba983d329a62f1cbc1aab457398a3c\""
Nov 6 00:30:35.108555 containerd[1616]: time="2025-11-06T00:30:35.108502011Z" level=info msg="StartContainer for \"3173bebe827878534cdb9fb8596d922defba983d329a62f1cbc1aab457398a3c\""
Nov 6 00:30:35.110404 containerd[1616]: time="2025-11-06T00:30:35.110367270Z" level=info msg="connecting to shim 3173bebe827878534cdb9fb8596d922defba983d329a62f1cbc1aab457398a3c" address="unix:///run/containerd/s/b747879e2f5ac597eff20cdc700f1554ba155b9c65aa44e05ad285f5a4e23d8e" protocol=ttrpc version=3
Nov 6 00:30:35.159267 containerd[1616]: time="2025-11-06T00:30:35.159215639Z" level=info msg="Container 930369498f1c22b636a671139f92545940521c3e0c9b669ad0873e852f0850b1: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:30:35.202648 systemd[1]: Started cri-containerd-3173bebe827878534cdb9fb8596d922defba983d329a62f1cbc1aab457398a3c.scope - libcontainer container 3173bebe827878534cdb9fb8596d922defba983d329a62f1cbc1aab457398a3c.
Nov 6 00:30:35.219408 containerd[1616]: time="2025-11-06T00:30:35.219338339Z" level=info msg="CreateContainer within sandbox \"9b1cd216c507dded95d12666a180ca05be8d841b4724f64e975f4853548193d4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"930369498f1c22b636a671139f92545940521c3e0c9b669ad0873e852f0850b1\""
Nov 6 00:30:35.231234 containerd[1616]: time="2025-11-06T00:30:35.231173280Z" level=info msg="StartContainer for \"930369498f1c22b636a671139f92545940521c3e0c9b669ad0873e852f0850b1\""
Nov 6 00:30:35.236251 containerd[1616]: time="2025-11-06T00:30:35.236199239Z" level=info msg="connecting to shim 930369498f1c22b636a671139f92545940521c3e0c9b669ad0873e852f0850b1" address="unix:///run/containerd/s/895e188826d49a57caa5f2d913618a48ee0d68e8e4288bed5d3f3dac7a6345e9" protocol=ttrpc version=3
Nov 6 00:30:35.297001 kubelet[2420]: W1106 00:30:35.296683 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused
Nov 6 00:30:35.299261 kubelet[2420]: E1106 00:30:35.297583 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError"
Nov 6 00:30:35.322835 systemd[1]: Started cri-containerd-a068f4edfb43e051223d03fedb07eb616bd941b7cd536962a609833122af0a2e.scope - libcontainer container a068f4edfb43e051223d03fedb07eb616bd941b7cd536962a609833122af0a2e.
Nov 6 00:30:35.398509 systemd[1]: Started cri-containerd-930369498f1c22b636a671139f92545940521c3e0c9b669ad0873e852f0850b1.scope - libcontainer container 930369498f1c22b636a671139f92545940521c3e0c9b669ad0873e852f0850b1.
Nov 6 00:30:35.650284 containerd[1616]: time="2025-11-06T00:30:35.649946349Z" level=info msg="StartContainer for \"a068f4edfb43e051223d03fedb07eb616bd941b7cd536962a609833122af0a2e\" returns successfully"
Nov 6 00:30:35.697725 kubelet[2420]: W1106 00:30:35.697424 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused
Nov 6 00:30:35.697725 kubelet[2420]: E1106 00:30:35.697611 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError"
Nov 6 00:30:35.712509 containerd[1616]: time="2025-11-06T00:30:35.712173791Z" level=info msg="StartContainer for \"3173bebe827878534cdb9fb8596d922defba983d329a62f1cbc1aab457398a3c\" returns successfully"
Nov 6 00:30:35.818178 containerd[1616]: time="2025-11-06T00:30:35.814507752Z" level=info msg="StartContainer for \"930369498f1c22b636a671139f92545940521c3e0c9b669ad0873e852f0850b1\" returns successfully"
Nov 6 00:30:36.040371 kubelet[2420]: E1106 00:30:36.040319 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 00:30:36.040530 kubelet[2420]: E1106 00:30:36.040504 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:36.054264 kubelet[2420]: E1106 00:30:36.054198 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 00:30:36.054504 kubelet[2420]: E1106 00:30:36.054480 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:36.056254 kubelet[2420]: E1106 00:30:36.056224 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 00:30:36.056407 kubelet[2420]: E1106 00:30:36.056384 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:36.497735 kubelet[2420]: I1106 00:30:36.497654 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 6 00:30:37.059832 kubelet[2420]: E1106 00:30:37.058399 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 00:30:37.059832 kubelet[2420]: E1106 00:30:37.058565 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:37.059832 kubelet[2420]: E1106 00:30:37.059446 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 00:30:37.059832 kubelet[2420]: E1106 00:30:37.059595 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:37.059832 kubelet[2420]: E1106 00:30:37.058930 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 00:30:37.059832 kubelet[2420]: E1106 00:30:37.059746 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:38.063917 kubelet[2420]: E1106 00:30:38.063826 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 00:30:38.064872 kubelet[2420]: E1106 00:30:38.064020 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:38.064872 kubelet[2420]: E1106 00:30:38.064646 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 00:30:38.065480 kubelet[2420]: E1106 00:30:38.065414 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:39.301134 kubelet[2420]: E1106 00:30:39.300376 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 00:30:39.301134 kubelet[2420]: E1106 00:30:39.300559 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:39.478003 kubelet[2420]: E1106 00:30:39.467623 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 00:30:39.478003 kubelet[2420]: E1106 00:30:39.469801 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:39.777786 kubelet[2420]: E1106 00:30:39.777727 2420 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Nov 6 00:30:39.854918 kubelet[2420]: I1106 00:30:39.854637 2420 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 6 00:30:39.891772 kubelet[2420]: I1106 00:30:39.891712 2420 apiserver.go:52] "Watching apiserver"
Nov 6 00:30:39.903039 kubelet[2420]: I1106 00:30:39.902982 2420 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 6 00:30:39.903691 kubelet[2420]: I1106 00:30:39.903056 2420 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:30:39.946641 kubelet[2420]: E1106 00:30:39.944835 2420 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:30:39.946641 kubelet[2420]: I1106 00:30:39.944883 2420 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 6 00:30:39.957471 kubelet[2420]: E1106 00:30:39.957412 2420 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Nov 6 00:30:39.957887 kubelet[2420]: I1106 00:30:39.957713 2420 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 6 00:30:39.966637 kubelet[2420]: E1106 00:30:39.966541 2420 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Nov 6 00:30:44.756594 systemd[1]: Reload requested from client PID 2713 ('systemctl') (unit session-7.scope)...
Nov 6 00:30:44.758166 systemd[1]: Reloading...
Nov 6 00:30:44.974006 zram_generator::config[2763]: No configuration found.
Nov 6 00:30:45.883455 systemd[1]: Reloading finished in 1124 ms.
Nov 6 00:30:45.920513 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:30:45.943834 systemd[1]: kubelet.service: Deactivated successfully.
Nov 6 00:30:45.945420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:30:45.946318 systemd[1]: kubelet.service: Consumed 1.837s CPU time, 133.9M memory peak.
Nov 6 00:30:45.957680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:30:46.430088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:30:46.454794 (kubelet)[2801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 6 00:30:46.587994 kubelet[2801]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 6 00:30:46.587994 kubelet[2801]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 6 00:30:46.587994 kubelet[2801]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 6 00:30:46.587994 kubelet[2801]: I1106 00:30:46.585608 2801 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 6 00:30:46.613992 kubelet[2801]: I1106 00:30:46.612397 2801 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 6 00:30:46.613992 kubelet[2801]: I1106 00:30:46.612440 2801 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 6 00:30:46.613992 kubelet[2801]: I1106 00:30:46.612792 2801 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 6 00:30:46.619023 kubelet[2801]: I1106 00:30:46.618186 2801 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 6 00:30:46.629996 kubelet[2801]: I1106 00:30:46.629859 2801 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 6 00:30:46.662458 kubelet[2801]: I1106 00:30:46.658960 2801 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 6 00:30:46.676332 kubelet[2801]: I1106 00:30:46.676260 2801 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 6 00:30:46.686149 kubelet[2801]: I1106 00:30:46.678194 2801 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 6 00:30:46.686149 kubelet[2801]: I1106 00:30:46.684251 2801 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 6 00:30:46.686149 kubelet[2801]: I1106 00:30:46.684541 2801 topology_manager.go:138] "Creating topology manager with none policy"
Nov 6 kubelet[2801]: I1106 00:30:46.684558 2801 container_manager_linux.go:304] "Creating device plugin manager"
Nov 6 00:30:46.686460 kubelet[2801]: I1106 00:30:46.684648 2801 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 00:30:46.686460 kubelet[2801]: I1106 00:30:46.684900 2801 kubelet.go:446] "Attempting to sync node with API server"
Nov 6 00:30:46.686460 kubelet[2801]: I1106 00:30:46.684976 2801 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 6 00:30:46.686460 kubelet[2801]: I1106 00:30:46.685013 2801 kubelet.go:352] "Adding apiserver pod source"
Nov 6 00:30:46.686460 kubelet[2801]: I1106 00:30:46.685772 2801 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 6 00:30:46.693143 kubelet[2801]: I1106 00:30:46.693089 2801 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 6 00:30:46.694124 kubelet[2801]: I1106 00:30:46.694068 2801 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 6 00:30:46.698968 kubelet[2801]: I1106 00:30:46.697091 2801 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 6 00:30:46.698968 kubelet[2801]: I1106 00:30:46.697644 2801 server.go:1287] "Started kubelet"
Nov 6 00:30:46.698968 kubelet[2801]: I1106 00:30:46.697823 2801 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 6 00:30:46.702509 kubelet[2801]: I1106 00:30:46.702473 2801 server.go:479] "Adding debug handlers to kubelet server"
Nov 6 00:30:46.703463 kubelet[2801]: I1106 00:30:46.703437 2801 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 6 00:30:46.709229 kubelet[2801]: I1106 00:30:46.704858 2801 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 6 00:30:46.709229 kubelet[2801]: I1106 00:30:46.705148 2801 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 6 00:30:46.709229 kubelet[2801]: I1106 00:30:46.705325 2801 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 6 00:30:46.709229 kubelet[2801]: E1106 00:30:46.705707 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 6 00:30:46.709229 kubelet[2801]: I1106 00:30:46.705733 2801 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 6 00:30:46.709229 kubelet[2801]: I1106 00:30:46.705903 2801 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 6 00:30:46.709229 kubelet[2801]: I1106 00:30:46.707145 2801 reconciler.go:26] "Reconciler: start to sync state"
Nov 6 00:30:46.741974 kubelet[2801]: I1106 00:30:46.726291 2801 factory.go:221] Registration of the systemd container factory successfully
Nov 6 00:30:46.741974 kubelet[2801]: I1106 00:30:46.726681 2801 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 6 00:30:46.886738 kubelet[2801]: E1106 00:30:46.885625 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 6 00:30:46.925315 kubelet[2801]: I1106 00:30:46.925229 2801 factory.go:221] Registration of the containerd container factory successfully
Nov 6 00:30:46.944054 kubelet[2801]: I1106 00:30:46.943853 2801 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 6 00:30:46.950883 kubelet[2801]: I1106 00:30:46.950530 2801 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 6 00:30:46.950883 kubelet[2801]: I1106 00:30:46.950590 2801 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 6 00:30:46.950883 kubelet[2801]: I1106 00:30:46.950620 2801 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 6 00:30:46.950883 kubelet[2801]: I1106 00:30:46.950630 2801 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 6 00:30:46.950883 kubelet[2801]: E1106 00:30:46.950704 2801 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 6 00:30:47.004002 kubelet[2801]: I1106 00:30:47.003476 2801 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 6 00:30:47.004002 kubelet[2801]: I1106 00:30:47.003498 2801 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 6 00:30:47.004002 kubelet[2801]: I1106 00:30:47.003522 2801 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 00:30:47.004002 kubelet[2801]: I1106 00:30:47.003715 2801 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 6 00:30:47.004002 kubelet[2801]: I1106 00:30:47.003728 2801 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 6 00:30:47.004002 kubelet[2801]: I1106 00:30:47.003749 2801 policy_none.go:49] "None policy: Start"
Nov 6 00:30:47.004002 kubelet[2801]: I1106 00:30:47.003760 2801 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 6 00:30:47.004002 kubelet[2801]: I1106 00:30:47.003772 2801 state_mem.go:35] "Initializing new in-memory state store"
Nov 6 00:30:47.004002 kubelet[2801]: I1106 00:30:47.003894 2801 state_mem.go:75] "Updated machine memory state"
Nov 6 00:30:47.014511 kubelet[2801]: I1106 00:30:47.014470 2801 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 6 00:30:47.015243 kubelet[2801]: I1106 00:30:47.015174 2801 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 6 00:30:47.015362 kubelet[2801]: I1106 00:30:47.015316 2801 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 6 00:30:47.021356 kubelet[2801]: E1106 00:30:47.017822 2801 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 6 00:30:47.021356 kubelet[2801]: I1106 00:30:47.020257 2801 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 6 00:30:47.053572 kubelet[2801]: I1106 00:30:47.052053 2801 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 6 00:30:47.053572 kubelet[2801]: I1106 00:30:47.052250 2801 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:30:47.053572 kubelet[2801]: I1106 00:30:47.052053 2801 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 6 00:30:47.142816 kubelet[2801]: I1106 00:30:47.142699 2801 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 6 00:30:47.186321 kubelet[2801]: I1106 00:30:47.186213 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:30:47.186321 kubelet[2801]: I1106 00:30:47.186298 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:30:47.186640 kubelet[2801]: I1106 00:30:47.186350 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a83e7ee3a718d9091644b4d837ece6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"50a83e7ee3a718d9091644b4d837ece6\") " pod="kube-system/kube-apiserver-localhost"
Nov 6 00:30:47.186640 kubelet[2801]: I1106 00:30:47.186382 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a83e7ee3a718d9091644b4d837ece6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"50a83e7ee3a718d9091644b4d837ece6\") " pod="kube-system/kube-apiserver-localhost"
Nov 6 00:30:47.186640 kubelet[2801]: I1106 00:30:47.186417 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:30:47.186640 kubelet[2801]: I1106 00:30:47.186442 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:30:47.186640 kubelet[2801]: I1106 00:30:47.186465 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a83e7ee3a718d9091644b4d837ece6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"50a83e7ee3a718d9091644b4d837ece6\") " pod="kube-system/kube-apiserver-localhost"
Nov 6 00:30:47.186889 kubelet[2801]: I1106 00:30:47.186505 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 00:30:47.186889 kubelet[2801]: I1106 00:30:47.186548 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost"
Nov 6 00:30:47.376126 kubelet[2801]: E1106 00:30:47.375613 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:47.382663 kubelet[2801]: E1106 00:30:47.381720 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:47.387770 kubelet[2801]: E1106 00:30:47.386867 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:47.574998 kubelet[2801]: I1106 00:30:47.573680 2801 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Nov 6 00:30:47.574998 kubelet[2801]: I1106 00:30:47.573807 2801 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 6 00:30:47.689151 kubelet[2801]: I1106 00:30:47.686913 2801 apiserver.go:52] "Watching apiserver"
Nov 6 00:30:47.707062 kubelet[2801]: I1106 00:30:47.706881 2801 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 6 00:30:47.788493 kubelet[2801]: I1106 00:30:47.788266 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.788248317 podStartE2EDuration="788.248317ms" podCreationTimestamp="2025-11-06 00:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:30:47.787887126 +0000 UTC m=+1.324015635" watchObservedRunningTime="2025-11-06 00:30:47.788248317 +0000 UTC m=+1.324376816"
Nov 6 00:30:47.840305 kubelet[2801]: I1106 00:30:47.839840 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.839794539 podStartE2EDuration="839.794539ms" podCreationTimestamp="2025-11-06 00:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:30:47.838143119 +0000 UTC m=+1.374271618" watchObservedRunningTime="2025-11-06 00:30:47.839794539 +0000 UTC m=+1.375923038"
Nov 6 00:30:47.930056 kubelet[2801]: I1106 00:30:47.929309 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.929283998 podStartE2EDuration="929.283998ms" podCreationTimestamp="2025-11-06 00:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:30:47.888061456 +0000 UTC m=+1.424189965" watchObservedRunningTime="2025-11-06 00:30:47.929283998 +0000 UTC m=+1.465412497"
Nov 6 00:30:47.976328 kubelet[2801]: E1106 00:30:47.974510 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:47.978710 kubelet[2801]: E1106 00:30:47.978383 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:47.984069 kubelet[2801]: E1106 00:30:47.983104 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:48.977596 kubelet[2801]: E1106 00:30:48.977539 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:48.979103 kubelet[2801]: E1106 00:30:48.977747 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:49.248530 kernel: hrtimer: interrupt took 4039803 ns
Nov 6 00:30:49.984830 kubelet[2801]: E1106 00:30:49.983366 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:49.984830 kubelet[2801]: E1106 00:30:49.984333 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:50.987222 kubelet[2801]: E1106 00:30:50.987151 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:30:51.591683 kubelet[2801]: I1106 00:30:51.591631 2801 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 6 00:30:51.598977 containerd[1616]: time="2025-11-06T00:30:51.597241270Z" level=info msg="No
cni config template is specified, wait for other system components to drop the config." Nov 6 00:30:51.600519 kubelet[2801]: I1106 00:30:51.599759 2801 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 00:30:52.080868 systemd[1]: Created slice kubepods-besteffort-pod8a4a9286_93ce_4488_9a19_af9e447143d2.slice - libcontainer container kubepods-besteffort-pod8a4a9286_93ce_4488_9a19_af9e447143d2.slice. Nov 6 00:30:52.161198 kubelet[2801]: I1106 00:30:52.161108 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a4a9286-93ce-4488-9a19-af9e447143d2-kube-proxy\") pod \"kube-proxy-sgxz9\" (UID: \"8a4a9286-93ce-4488-9a19-af9e447143d2\") " pod="kube-system/kube-proxy-sgxz9" Nov 6 00:30:52.161198 kubelet[2801]: I1106 00:30:52.161169 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a4a9286-93ce-4488-9a19-af9e447143d2-lib-modules\") pod \"kube-proxy-sgxz9\" (UID: \"8a4a9286-93ce-4488-9a19-af9e447143d2\") " pod="kube-system/kube-proxy-sgxz9" Nov 6 00:30:52.161198 kubelet[2801]: I1106 00:30:52.161198 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljsld\" (UniqueName: \"kubernetes.io/projected/8a4a9286-93ce-4488-9a19-af9e447143d2-kube-api-access-ljsld\") pod \"kube-proxy-sgxz9\" (UID: \"8a4a9286-93ce-4488-9a19-af9e447143d2\") " pod="kube-system/kube-proxy-sgxz9" Nov 6 00:30:52.161891 kubelet[2801]: I1106 00:30:52.161226 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a4a9286-93ce-4488-9a19-af9e447143d2-xtables-lock\") pod \"kube-proxy-sgxz9\" (UID: \"8a4a9286-93ce-4488-9a19-af9e447143d2\") " pod="kube-system/kube-proxy-sgxz9" Nov 6 00:30:52.383851 
systemd[1]: Created slice kubepods-besteffort-podbfbd106e_83cd_4d61_a6db_21fb4178dbcd.slice - libcontainer container kubepods-besteffort-podbfbd106e_83cd_4d61_a6db_21fb4178dbcd.slice. Nov 6 00:30:52.411982 kubelet[2801]: E1106 00:30:52.410734 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:30:52.412154 containerd[1616]: time="2025-11-06T00:30:52.411831970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sgxz9,Uid:8a4a9286-93ce-4488-9a19-af9e447143d2,Namespace:kube-system,Attempt:0,}" Nov 6 00:30:52.466727 kubelet[2801]: I1106 00:30:52.465915 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csrd2\" (UniqueName: \"kubernetes.io/projected/bfbd106e-83cd-4d61-a6db-21fb4178dbcd-kube-api-access-csrd2\") pod \"tigera-operator-7dcd859c48-c6mk4\" (UID: \"bfbd106e-83cd-4d61-a6db-21fb4178dbcd\") " pod="tigera-operator/tigera-operator-7dcd859c48-c6mk4" Nov 6 00:30:52.466727 kubelet[2801]: I1106 00:30:52.466364 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bfbd106e-83cd-4d61-a6db-21fb4178dbcd-var-lib-calico\") pod \"tigera-operator-7dcd859c48-c6mk4\" (UID: \"bfbd106e-83cd-4d61-a6db-21fb4178dbcd\") " pod="tigera-operator/tigera-operator-7dcd859c48-c6mk4" Nov 6 00:30:52.542364 containerd[1616]: time="2025-11-06T00:30:52.542225136Z" level=info msg="connecting to shim 6c3bca690b764fee8d4c7ccfe054e59607cda7283b0c680771d55be0f9813064" address="unix:///run/containerd/s/2a70943696384a6cae89dbfeee8c3a1094a2d88386a9db6d12a20a9d2deac80f" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:30:52.650270 systemd[1]: Started cri-containerd-6c3bca690b764fee8d4c7ccfe054e59607cda7283b0c680771d55be0f9813064.scope - libcontainer container 
6c3bca690b764fee8d4c7ccfe054e59607cda7283b0c680771d55be0f9813064. Nov 6 00:30:52.695513 containerd[1616]: time="2025-11-06T00:30:52.695406855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-c6mk4,Uid:bfbd106e-83cd-4d61-a6db-21fb4178dbcd,Namespace:tigera-operator,Attempt:0,}" Nov 6 00:30:52.761966 containerd[1616]: time="2025-11-06T00:30:52.760198842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sgxz9,Uid:8a4a9286-93ce-4488-9a19-af9e447143d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c3bca690b764fee8d4c7ccfe054e59607cda7283b0c680771d55be0f9813064\"" Nov 6 00:30:52.764422 kubelet[2801]: E1106 00:30:52.761476 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:30:52.771932 containerd[1616]: time="2025-11-06T00:30:52.770766068Z" level=info msg="CreateContainer within sandbox \"6c3bca690b764fee8d4c7ccfe054e59607cda7283b0c680771d55be0f9813064\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 00:30:52.801295 containerd[1616]: time="2025-11-06T00:30:52.801225860Z" level=info msg="Container f964e681dbd2fa8413ad238dfcbcea595ff43f13cdf7b07a0094e83c0c14e8b8: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:30:52.815601 containerd[1616]: time="2025-11-06T00:30:52.815467457Z" level=info msg="connecting to shim ddb2e38ea6de5bdab35417a388be93c9362bfcaffadf6e2b25eef00e82fd8e3c" address="unix:///run/containerd/s/5c0f8b488cb644c6fc4a020f64ba525de82fa8a370d9ec612b680b0198c60044" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:30:52.822202 containerd[1616]: time="2025-11-06T00:30:52.822141920Z" level=info msg="CreateContainer within sandbox \"6c3bca690b764fee8d4c7ccfe054e59607cda7283b0c680771d55be0f9813064\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f964e681dbd2fa8413ad238dfcbcea595ff43f13cdf7b07a0094e83c0c14e8b8\"" 
Nov 6 00:30:52.826096 containerd[1616]: time="2025-11-06T00:30:52.826045984Z" level=info msg="StartContainer for \"f964e681dbd2fa8413ad238dfcbcea595ff43f13cdf7b07a0094e83c0c14e8b8\"" Nov 6 00:30:52.830670 containerd[1616]: time="2025-11-06T00:30:52.830136187Z" level=info msg="connecting to shim f964e681dbd2fa8413ad238dfcbcea595ff43f13cdf7b07a0094e83c0c14e8b8" address="unix:///run/containerd/s/2a70943696384a6cae89dbfeee8c3a1094a2d88386a9db6d12a20a9d2deac80f" protocol=ttrpc version=3 Nov 6 00:30:52.858708 systemd[1]: Started cri-containerd-ddb2e38ea6de5bdab35417a388be93c9362bfcaffadf6e2b25eef00e82fd8e3c.scope - libcontainer container ddb2e38ea6de5bdab35417a388be93c9362bfcaffadf6e2b25eef00e82fd8e3c. Nov 6 00:30:52.868727 systemd[1]: Started cri-containerd-f964e681dbd2fa8413ad238dfcbcea595ff43f13cdf7b07a0094e83c0c14e8b8.scope - libcontainer container f964e681dbd2fa8413ad238dfcbcea595ff43f13cdf7b07a0094e83c0c14e8b8. Nov 6 00:30:53.023830 containerd[1616]: time="2025-11-06T00:30:53.023760085Z" level=info msg="StartContainer for \"f964e681dbd2fa8413ad238dfcbcea595ff43f13cdf7b07a0094e83c0c14e8b8\" returns successfully" Nov 6 00:30:53.031311 containerd[1616]: time="2025-11-06T00:30:53.031191491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-c6mk4,Uid:bfbd106e-83cd-4d61-a6db-21fb4178dbcd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ddb2e38ea6de5bdab35417a388be93c9362bfcaffadf6e2b25eef00e82fd8e3c\"" Nov 6 00:30:53.038649 containerd[1616]: time="2025-11-06T00:30:53.038598440Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 6 00:30:53.447667 kubelet[2801]: E1106 00:30:53.441327 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:30:54.023603 kubelet[2801]: E1106 00:30:54.023112 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:30:54.025556 kubelet[2801]: E1106 00:30:54.025512 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:30:54.062968 kubelet[2801]: I1106 00:30:54.060377 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sgxz9" podStartSLOduration=2.06035729 podStartE2EDuration="2.06035729s" podCreationTimestamp="2025-11-06 00:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:30:54.060172733 +0000 UTC m=+7.596301232" watchObservedRunningTime="2025-11-06 00:30:54.06035729 +0000 UTC m=+7.596485789" Nov 6 00:30:54.933260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1097413282.mount: Deactivated successfully. Nov 6 00:30:55.029870 kubelet[2801]: E1106 00:30:55.028366 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:30:57.275579 containerd[1616]: time="2025-11-06T00:30:57.275484649Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:57.278834 containerd[1616]: time="2025-11-06T00:30:57.278773760Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 6 00:30:57.284029 containerd[1616]: time="2025-11-06T00:30:57.283985676Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:57.290068 containerd[1616]: time="2025-11-06T00:30:57.289826843Z" level=info msg="ImageCreate event 
name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:30:57.291173 containerd[1616]: time="2025-11-06T00:30:57.290407294Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.251760103s" Nov 6 00:30:57.291173 containerd[1616]: time="2025-11-06T00:30:57.290444544Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 6 00:30:57.301014 containerd[1616]: time="2025-11-06T00:30:57.298592537Z" level=info msg="CreateContainer within sandbox \"ddb2e38ea6de5bdab35417a388be93c9362bfcaffadf6e2b25eef00e82fd8e3c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 6 00:30:57.337858 containerd[1616]: time="2025-11-06T00:30:57.337787298Z" level=info msg="Container 740cf024f9d65f4e276d830b5f9a6cd8709466856d2ec01246c03ad9eb780456: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:30:57.352969 containerd[1616]: time="2025-11-06T00:30:57.351157677Z" level=info msg="CreateContainer within sandbox \"ddb2e38ea6de5bdab35417a388be93c9362bfcaffadf6e2b25eef00e82fd8e3c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"740cf024f9d65f4e276d830b5f9a6cd8709466856d2ec01246c03ad9eb780456\"" Nov 6 00:30:57.356059 containerd[1616]: time="2025-11-06T00:30:57.354175759Z" level=info msg="StartContainer for \"740cf024f9d65f4e276d830b5f9a6cd8709466856d2ec01246c03ad9eb780456\"" Nov 6 00:30:57.356059 containerd[1616]: time="2025-11-06T00:30:57.355498003Z" level=info msg="connecting to shim 
740cf024f9d65f4e276d830b5f9a6cd8709466856d2ec01246c03ad9eb780456" address="unix:///run/containerd/s/5c0f8b488cb644c6fc4a020f64ba525de82fa8a370d9ec612b680b0198c60044" protocol=ttrpc version=3 Nov 6 00:30:57.468279 systemd[1]: Started cri-containerd-740cf024f9d65f4e276d830b5f9a6cd8709466856d2ec01246c03ad9eb780456.scope - libcontainer container 740cf024f9d65f4e276d830b5f9a6cd8709466856d2ec01246c03ad9eb780456. Nov 6 00:30:57.588264 containerd[1616]: time="2025-11-06T00:30:57.588052599Z" level=info msg="StartContainer for \"740cf024f9d65f4e276d830b5f9a6cd8709466856d2ec01246c03ad9eb780456\" returns successfully" Nov 6 00:30:58.102643 kubelet[2801]: I1106 00:30:58.102351 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-c6mk4" podStartSLOduration=1.843147382 podStartE2EDuration="6.102330061s" podCreationTimestamp="2025-11-06 00:30:52 +0000 UTC" firstStartedPulling="2025-11-06 00:30:53.032874566 +0000 UTC m=+6.569003065" lastFinishedPulling="2025-11-06 00:30:57.292057245 +0000 UTC m=+10.828185744" observedRunningTime="2025-11-06 00:30:58.0996371 +0000 UTC m=+11.635765599" watchObservedRunningTime="2025-11-06 00:30:58.102330061 +0000 UTC m=+11.638458590" Nov 6 00:31:05.551815 sudo[1821]: pam_unix(sudo:session): session closed for user root Nov 6 00:31:05.566180 sshd[1820]: Connection closed by 10.0.0.1 port 49564 Nov 6 00:31:05.566308 sshd-session[1817]: pam_unix(sshd:session): session closed for user core Nov 6 00:31:05.579893 systemd[1]: sshd@6-10.0.0.111:22-10.0.0.1:49564.service: Deactivated successfully. Nov 6 00:31:05.587670 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 00:31:05.589276 systemd[1]: session-7.scope: Consumed 6.223s CPU time, 223.2M memory peak. Nov 6 00:31:05.595334 systemd-logind[1591]: Session 7 logged out. Waiting for processes to exit. Nov 6 00:31:05.598030 systemd-logind[1591]: Removed session 7. 
Nov 6 00:31:15.806754 systemd[1]: Created slice kubepods-besteffort-podb1ae62c8_f069_4b30_b650_6ad7c6a1aece.slice - libcontainer container kubepods-besteffort-podb1ae62c8_f069_4b30_b650_6ad7c6a1aece.slice. Nov 6 00:31:15.936636 kubelet[2801]: I1106 00:31:15.936546 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1ae62c8-f069-4b30-b650-6ad7c6a1aece-tigera-ca-bundle\") pod \"calico-typha-67457dfc98-r2wp9\" (UID: \"b1ae62c8-f069-4b30-b650-6ad7c6a1aece\") " pod="calico-system/calico-typha-67457dfc98-r2wp9" Nov 6 00:31:15.936636 kubelet[2801]: I1106 00:31:15.936624 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b1ae62c8-f069-4b30-b650-6ad7c6a1aece-typha-certs\") pod \"calico-typha-67457dfc98-r2wp9\" (UID: \"b1ae62c8-f069-4b30-b650-6ad7c6a1aece\") " pod="calico-system/calico-typha-67457dfc98-r2wp9" Nov 6 00:31:15.938024 kubelet[2801]: I1106 00:31:15.936657 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwntm\" (UniqueName: \"kubernetes.io/projected/b1ae62c8-f069-4b30-b650-6ad7c6a1aece-kube-api-access-nwntm\") pod \"calico-typha-67457dfc98-r2wp9\" (UID: \"b1ae62c8-f069-4b30-b650-6ad7c6a1aece\") " pod="calico-system/calico-typha-67457dfc98-r2wp9" Nov 6 00:31:16.047217 systemd[1]: Created slice kubepods-besteffort-podbb8e49a1_f7ea_4949_a4e5_b6eb47fb9892.slice - libcontainer container kubepods-besteffort-podbb8e49a1_f7ea_4949_a4e5_b6eb47fb9892.slice. 
Nov 6 00:31:16.117622 kubelet[2801]: E1106 00:31:16.117322 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:31:16.130228 containerd[1616]: time="2025-11-06T00:31:16.127459884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67457dfc98-r2wp9,Uid:b1ae62c8-f069-4b30-b650-6ad7c6a1aece,Namespace:calico-system,Attempt:0,}" Nov 6 00:31:16.138300 kubelet[2801]: I1106 00:31:16.138217 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892-node-certs\") pod \"calico-node-lh9tf\" (UID: \"bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892\") " pod="calico-system/calico-node-lh9tf" Nov 6 00:31:16.138300 kubelet[2801]: I1106 00:31:16.138289 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892-cni-bin-dir\") pod \"calico-node-lh9tf\" (UID: \"bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892\") " pod="calico-system/calico-node-lh9tf" Nov 6 00:31:16.138500 kubelet[2801]: I1106 00:31:16.138332 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892-flexvol-driver-host\") pod \"calico-node-lh9tf\" (UID: \"bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892\") " pod="calico-system/calico-node-lh9tf" Nov 6 00:31:16.138500 kubelet[2801]: I1106 00:31:16.138366 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892-policysync\") pod \"calico-node-lh9tf\" (UID: \"bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892\") " 
pod="calico-system/calico-node-lh9tf" Nov 6 00:31:16.138500 kubelet[2801]: I1106 00:31:16.138389 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892-var-run-calico\") pod \"calico-node-lh9tf\" (UID: \"bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892\") " pod="calico-system/calico-node-lh9tf" Nov 6 00:31:16.138500 kubelet[2801]: I1106 00:31:16.138421 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2442s\" (UniqueName: \"kubernetes.io/projected/bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892-kube-api-access-2442s\") pod \"calico-node-lh9tf\" (UID: \"bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892\") " pod="calico-system/calico-node-lh9tf" Nov 6 00:31:16.138500 kubelet[2801]: I1106 00:31:16.138448 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892-lib-modules\") pod \"calico-node-lh9tf\" (UID: \"bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892\") " pod="calico-system/calico-node-lh9tf" Nov 6 00:31:16.138691 kubelet[2801]: I1106 00:31:16.138474 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892-tigera-ca-bundle\") pod \"calico-node-lh9tf\" (UID: \"bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892\") " pod="calico-system/calico-node-lh9tf" Nov 6 00:31:16.138691 kubelet[2801]: I1106 00:31:16.138569 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892-cni-log-dir\") pod \"calico-node-lh9tf\" (UID: \"bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892\") " pod="calico-system/calico-node-lh9tf" Nov 6 00:31:16.138691 
kubelet[2801]: I1106 00:31:16.138601 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892-var-lib-calico\") pod \"calico-node-lh9tf\" (UID: \"bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892\") " pod="calico-system/calico-node-lh9tf" Nov 6 00:31:16.138691 kubelet[2801]: I1106 00:31:16.138628 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892-cni-net-dir\") pod \"calico-node-lh9tf\" (UID: \"bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892\") " pod="calico-system/calico-node-lh9tf" Nov 6 00:31:16.138691 kubelet[2801]: I1106 00:31:16.138659 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892-xtables-lock\") pod \"calico-node-lh9tf\" (UID: \"bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892\") " pod="calico-system/calico-node-lh9tf" Nov 6 00:31:16.249970 kubelet[2801]: E1106 00:31:16.249009 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:16.249970 kubelet[2801]: W1106 00:31:16.249043 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:16.249970 kubelet[2801]: E1106 00:31:16.249084 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:31:16.258265 containerd[1616]: time="2025-11-06T00:31:16.258206153Z" level=info msg="connecting to shim f36b2f03eb67440f3da5fece831e2c2e205517d25535e6190bcde69c5d1c9cd1" address="unix:///run/containerd/s/1c39c2fb81d5cd3cfdd40bb8e5ed0468efcdb20f19a7c56f55e3d4de6258127f" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:31:16.262114 kubelet[2801]: E1106 00:31:16.262054 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:16.262114 kubelet[2801]: W1106 00:31:16.262110 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:16.262315 kubelet[2801]: E1106 00:31:16.262150 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:16.272541 kubelet[2801]: E1106 00:31:16.272439 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2" Nov 6 00:31:16.280258 kubelet[2801]: E1106 00:31:16.280209 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:16.280258 kubelet[2801]: W1106 00:31:16.280250 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:16.280420 kubelet[2801]: E1106 00:31:16.280278 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:16.295488 kubelet[2801]: E1106 00:31:16.295441 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:16.295488 kubelet[2801]: W1106 00:31:16.295479 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:16.301274 kubelet[2801]: E1106 00:31:16.301032 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:16.309243 kubelet[2801]: E1106 00:31:16.308361 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:16.309243 kubelet[2801]: W1106 00:31:16.308388 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:16.309243 kubelet[2801]: E1106 00:31:16.308416 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:31:16.312485 kubelet[2801]: E1106 00:31:16.310359 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:16.312485 kubelet[2801]: W1106 00:31:16.310376 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:16.312485 kubelet[2801]: E1106 00:31:16.310396 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:16.319429 kubelet[2801]: E1106 00:31:16.319384 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:16.319429 kubelet[2801]: W1106 00:31:16.319421 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:16.319608 kubelet[2801]: E1106 00:31:16.319450 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 6 00:31:16.320280 kubelet[2801]: E1106 00:31:16.320222 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:16.320360 kubelet[2801]: W1106 00:31:16.320260 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:16.320360 kubelet[2801]: E1106 00:31:16.320328 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:16.327552 systemd[1]: Started cri-containerd-f36b2f03eb67440f3da5fece831e2c2e205517d25535e6190bcde69c5d1c9cd1.scope - libcontainer container f36b2f03eb67440f3da5fece831e2c2e205517d25535e6190bcde69c5d1c9cd1.
Nov 6 00:31:16.341325 kubelet[2801]: I1106 00:31:16.341113 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3e3d7027-cc01-4677-b498-d2aaae1cd6f2-registration-dir\") pod \"csi-node-driver-xwlzm\" (UID: \"3e3d7027-cc01-4677-b498-d2aaae1cd6f2\") " pod="calico-system/csi-node-driver-xwlzm"
Nov 6 00:31:16.341627 kubelet[2801]: I1106 00:31:16.341574 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3e3d7027-cc01-4677-b498-d2aaae1cd6f2-varrun\") pod \"csi-node-driver-xwlzm\" (UID: \"3e3d7027-cc01-4677-b498-d2aaae1cd6f2\") " pod="calico-system/csi-node-driver-xwlzm"
Nov 6 00:31:16.350012 kubelet[2801]: I1106 00:31:16.349718 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3e3d7027-cc01-4677-b498-d2aaae1cd6f2-socket-dir\") pod \"csi-node-driver-xwlzm\" (UID: \"3e3d7027-cc01-4677-b498-d2aaae1cd6f2\") " pod="calico-system/csi-node-driver-xwlzm"
Nov 6 00:31:16.351978 kubelet[2801]: I1106 00:31:16.351708 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e3d7027-cc01-4677-b498-d2aaae1cd6f2-kubelet-dir\") pod \"csi-node-driver-xwlzm\" (UID: \"3e3d7027-cc01-4677-b498-d2aaae1cd6f2\") " pod="calico-system/csi-node-driver-xwlzm"
Nov 6 00:31:16.352532 kubelet[2801]: E1106 00:31:16.352508 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:31:16.356715 containerd[1616]: time="2025-11-06T00:31:16.353843380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lh9tf,Uid:bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892,Namespace:calico-system,Attempt:0,}"
Nov 6 00:31:16.356766 kubelet[2801]: I1106 00:31:16.356478 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxgzw\" (UniqueName: \"kubernetes.io/projected/3e3d7027-cc01-4677-b498-d2aaae1cd6f2-kube-api-access-bxgzw\") pod \"csi-node-driver-xwlzm\" (UID: \"3e3d7027-cc01-4677-b498-d2aaae1cd6f2\") " pod="calico-system/csi-node-driver-xwlzm"
Nov 6 00:31:16.432839 containerd[1616]: time="2025-11-06T00:31:16.432729082Z" level=info msg="connecting to shim c26698725bc81f10646d1bbf5570895792ff15b35f2c56ca64e9713eaa073f63" address="unix:///run/containerd/s/56d0b88b8a6461a5a7b0f8a5720fe767eb924cc5e3ed2e70e7779970ec47387f" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:31:16.475783 systemd[1]: Started cri-containerd-c26698725bc81f10646d1bbf5570895792ff15b35f2c56ca64e9713eaa073f63.scope - libcontainer container c26698725bc81f10646d1bbf5570895792ff15b35f2c56ca64e9713eaa073f63.
Nov 6 00:31:16.476339 kubelet[2801]: E1106 00:31:16.476299 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:31:16.477045 kubelet[2801]: E1106 00:31:16.476995 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:16.477045 kubelet[2801]: W1106 00:31:16.477012 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:16.477296 kubelet[2801]: E1106 00:31:16.477254 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:16.477685 kubelet[2801]: E1106 00:31:16.477668 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:16.477851 kubelet[2801]: W1106 00:31:16.477765 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:16.477957 kubelet[2801]: E1106 00:31:16.477922 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:31:16.479454 kubelet[2801]: E1106 00:31:16.479432 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:16.479542 kubelet[2801]: W1106 00:31:16.479526 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:16.479725 kubelet[2801]: E1106 00:31:16.479680 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:16.482068 containerd[1616]: time="2025-11-06T00:31:16.481992536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67457dfc98-r2wp9,Uid:b1ae62c8-f069-4b30-b650-6ad7c6a1aece,Namespace:calico-system,Attempt:0,} returns sandbox id \"f36b2f03eb67440f3da5fece831e2c2e205517d25535e6190bcde69c5d1c9cd1\"" Nov 6 00:31:16.482632 kubelet[2801]: E1106 00:31:16.482540 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:16.482707 kubelet[2801]: W1106 00:31:16.482631 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:16.482707 kubelet[2801]: E1106 00:31:16.482699 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:31:16.491606 kubelet[2801]: E1106 00:31:16.491552 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:31:16.492841 containerd[1616]: time="2025-11-06T00:31:16.492799585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 6 00:31:16.520665 kubelet[2801]: E1106 00:31:16.520502 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:16.520665 kubelet[2801]: W1106 00:31:16.520544 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:16.520665 kubelet[2801]: E1106 00:31:16.520571 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:31:16.581963 containerd[1616]: time="2025-11-06T00:31:16.581794597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lh9tf,Uid:bb8e49a1-f7ea-4949-a4e5-b6eb47fb9892,Namespace:calico-system,Attempt:0,} returns sandbox id \"c26698725bc81f10646d1bbf5570895792ff15b35f2c56ca64e9713eaa073f63\"" Nov 6 00:31:16.585400 kubelet[2801]: E1106 00:31:16.584017 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:31:17.952879 kubelet[2801]: E1106 00:31:17.952680 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2" Nov 6 00:31:19.954884 kubelet[2801]: E1106 00:31:19.954823 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2" Nov 6 00:31:20.116430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount657098729.mount: Deactivated successfully. 
Nov 6 00:31:21.953687 kubelet[2801]: E1106 00:31:21.952389 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2" Nov 6 00:31:22.224933 containerd[1616]: time="2025-11-06T00:31:22.224166028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:31:22.225471 containerd[1616]: time="2025-11-06T00:31:22.225127633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 6 00:31:22.227061 containerd[1616]: time="2025-11-06T00:31:22.226979627Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:31:22.233212 containerd[1616]: time="2025-11-06T00:31:22.231522741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:31:22.234502 containerd[1616]: time="2025-11-06T00:31:22.234442068Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 5.741600564s" Nov 6 00:31:22.234502 containerd[1616]: time="2025-11-06T00:31:22.234492993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 6 00:31:22.243580 containerd[1616]: time="2025-11-06T00:31:22.241312298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 6 00:31:22.273680 containerd[1616]: time="2025-11-06T00:31:22.273191347Z" level=info msg="CreateContainer within sandbox \"f36b2f03eb67440f3da5fece831e2c2e205517d25535e6190bcde69c5d1c9cd1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 6 00:31:22.289537 containerd[1616]: time="2025-11-06T00:31:22.289431146Z" level=info msg="Container eb1f63f6106de3f4f0647796e7d4d818f484409379bb75fb2f71b9aa96de655d: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:31:22.309818 containerd[1616]: time="2025-11-06T00:31:22.309721884Z" level=info msg="CreateContainer within sandbox \"f36b2f03eb67440f3da5fece831e2c2e205517d25535e6190bcde69c5d1c9cd1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"eb1f63f6106de3f4f0647796e7d4d818f484409379bb75fb2f71b9aa96de655d\"" Nov 6 00:31:22.314880 containerd[1616]: time="2025-11-06T00:31:22.314800813Z" level=info msg="StartContainer for \"eb1f63f6106de3f4f0647796e7d4d818f484409379bb75fb2f71b9aa96de655d\"" Nov 6 00:31:22.321149 containerd[1616]: time="2025-11-06T00:31:22.320786944Z" level=info msg="connecting to shim eb1f63f6106de3f4f0647796e7d4d818f484409379bb75fb2f71b9aa96de655d" address="unix:///run/containerd/s/1c39c2fb81d5cd3cfdd40bb8e5ed0468efcdb20f19a7c56f55e3d4de6258127f" protocol=ttrpc version=3 Nov 6 00:31:22.408841 systemd[1]: Started cri-containerd-eb1f63f6106de3f4f0647796e7d4d818f484409379bb75fb2f71b9aa96de655d.scope - libcontainer container eb1f63f6106de3f4f0647796e7d4d818f484409379bb75fb2f71b9aa96de655d. 
Nov 6 00:31:22.565791 containerd[1616]: time="2025-11-06T00:31:22.565442927Z" level=info msg="StartContainer for \"eb1f63f6106de3f4f0647796e7d4d818f484409379bb75fb2f71b9aa96de655d\" returns successfully" Nov 6 00:31:23.228000 kubelet[2801]: E1106 00:31:23.227906 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:31:23.232341 kubelet[2801]: E1106 00:31:23.232292 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.232341 kubelet[2801]: W1106 00:31:23.232334 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.232341 kubelet[2801]: E1106 00:31:23.232365 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:23.235152 kubelet[2801]: E1106 00:31:23.235126 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.235152 kubelet[2801]: W1106 00:31:23.235146 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.235152 kubelet[2801]: E1106 00:31:23.235166 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:31:23.235455 kubelet[2801]: E1106 00:31:23.235429 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.235455 kubelet[2801]: W1106 00:31:23.235448 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.235546 kubelet[2801]: E1106 00:31:23.235460 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:23.235750 kubelet[2801]: E1106 00:31:23.235729 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.235750 kubelet[2801]: W1106 00:31:23.235745 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.235750 kubelet[2801]: E1106 00:31:23.235759 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:31:23.236021 kubelet[2801]: E1106 00:31:23.235980 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.236021 kubelet[2801]: W1106 00:31:23.235997 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.236021 kubelet[2801]: E1106 00:31:23.236007 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:23.236239 kubelet[2801]: E1106 00:31:23.236176 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.236239 kubelet[2801]: W1106 00:31:23.236184 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.236239 kubelet[2801]: E1106 00:31:23.236192 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:31:23.236449 kubelet[2801]: E1106 00:31:23.236354 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.236449 kubelet[2801]: W1106 00:31:23.236362 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.236449 kubelet[2801]: E1106 00:31:23.236370 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:23.236642 kubelet[2801]: E1106 00:31:23.236530 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.236642 kubelet[2801]: W1106 00:31:23.236539 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.236642 kubelet[2801]: E1106 00:31:23.236549 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:31:23.236864 kubelet[2801]: E1106 00:31:23.236768 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.236864 kubelet[2801]: W1106 00:31:23.236778 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.236864 kubelet[2801]: E1106 00:31:23.236787 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:23.237251 kubelet[2801]: E1106 00:31:23.236975 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.237251 kubelet[2801]: W1106 00:31:23.236985 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.237251 kubelet[2801]: E1106 00:31:23.237006 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:31:23.237251 kubelet[2801]: E1106 00:31:23.237222 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.237251 kubelet[2801]: W1106 00:31:23.237236 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.237251 kubelet[2801]: E1106 00:31:23.237248 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:23.237677 kubelet[2801]: E1106 00:31:23.237469 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.237677 kubelet[2801]: W1106 00:31:23.237481 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.237677 kubelet[2801]: E1106 00:31:23.237494 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:31:23.237888 kubelet[2801]: E1106 00:31:23.237724 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.237888 kubelet[2801]: W1106 00:31:23.237735 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.237888 kubelet[2801]: E1106 00:31:23.237746 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:23.246177 kubelet[2801]: E1106 00:31:23.240050 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.246177 kubelet[2801]: W1106 00:31:23.240072 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.246177 kubelet[2801]: E1106 00:31:23.240091 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:31:23.246177 kubelet[2801]: E1106 00:31:23.243361 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.246177 kubelet[2801]: W1106 00:31:23.243375 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.246177 kubelet[2801]: E1106 00:31:23.243392 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:23.262963 kubelet[2801]: E1106 00:31:23.262897 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.262963 kubelet[2801]: W1106 00:31:23.262931 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.262963 kubelet[2801]: E1106 00:31:23.262972 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:31:23.263355 kubelet[2801]: E1106 00:31:23.263315 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.263355 kubelet[2801]: W1106 00:31:23.263333 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.263447 kubelet[2801]: E1106 00:31:23.263363 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:23.264004 kubelet[2801]: E1106 00:31:23.263957 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.264071 kubelet[2801]: W1106 00:31:23.264006 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.264071 kubelet[2801]: E1106 00:31:23.264028 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:31:23.264611 kubelet[2801]: E1106 00:31:23.264387 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.264611 kubelet[2801]: W1106 00:31:23.264408 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.264611 kubelet[2801]: E1106 00:31:23.264444 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:23.264705 kubelet[2801]: E1106 00:31:23.264695 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.264737 kubelet[2801]: W1106 00:31:23.264706 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.264960 kubelet[2801]: E1106 00:31:23.264923 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.265144 kubelet[2801]: W1106 00:31:23.265055 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.265477 kubelet[2801]: E1106 00:31:23.265367 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.265477 kubelet[2801]: W1106 00:31:23.265400 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Nov 6 00:31:23.265477 kubelet[2801]: E1106 00:31:23.265413 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:23.265816 kubelet[2801]: E1106 00:31:23.265724 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.265816 kubelet[2801]: W1106 00:31:23.265736 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.265816 kubelet[2801]: E1106 00:31:23.265746 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:23.266165 kubelet[2801]: E1106 00:31:23.266103 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:23.266165 kubelet[2801]: E1106 00:31:23.266137 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:31:23.266485 kubelet[2801]: E1106 00:31:23.266465 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.266659 kubelet[2801]: W1106 00:31:23.266570 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.266659 kubelet[2801]: E1106 00:31:23.266596 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:31:23.266915 kubelet[2801]: E1106 00:31:23.266839 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:31:23.266915 kubelet[2801]: W1106 00:31:23.266850 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:31:23.266915 kubelet[2801]: E1106 00:31:23.266863 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 6 00:31:23.267363 kubelet[2801]: E1106 00:31:23.267333 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:23.267363 kubelet[2801]: W1106 00:31:23.267360 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:23.267497 kubelet[2801]: E1106 00:31:23.267380 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:23.267629 kubelet[2801]: E1106 00:31:23.267622 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:23.267662 kubelet[2801]: W1106 00:31:23.267633 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:23.268194 kubelet[2801]: E1106 00:31:23.267731 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:23.268194 kubelet[2801]: E1106 00:31:23.267864 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:23.268194 kubelet[2801]: W1106 00:31:23.267876 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:23.268194 kubelet[2801]: E1106 00:31:23.268010 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:23.268375 kubelet[2801]: E1106 00:31:23.268320 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:23.268375 kubelet[2801]: W1106 00:31:23.268332 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:23.268375 kubelet[2801]: E1106 00:31:23.268347 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:23.268606 kubelet[2801]: E1106 00:31:23.268569 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:23.268606 kubelet[2801]: W1106 00:31:23.268591 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:23.268606 kubelet[2801]: E1106 00:31:23.268608 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:23.268888 kubelet[2801]: E1106 00:31:23.268855 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:23.268888 kubelet[2801]: W1106 00:31:23.268872 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:23.268888 kubelet[2801]: E1106 00:31:23.268889 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:23.269307 kubelet[2801]: E1106 00:31:23.269273 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:23.269307 kubelet[2801]: W1106 00:31:23.269291 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:23.269307 kubelet[2801]: E1106 00:31:23.269309 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:23.269533 kubelet[2801]: E1106 00:31:23.269501 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:23.269533 kubelet[2801]: W1106 00:31:23.269518 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:23.269533 kubelet[2801]: E1106 00:31:23.269530 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:23.291503 kubelet[2801]: I1106 00:31:23.290716 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67457dfc98-r2wp9" podStartSLOduration=2.539975097 podStartE2EDuration="8.284259876s" podCreationTimestamp="2025-11-06 00:31:15 +0000 UTC" firstStartedPulling="2025-11-06 00:31:16.49232738 +0000 UTC m=+30.028455879" lastFinishedPulling="2025-11-06 00:31:22.236612159 +0000 UTC m=+35.772740658" observedRunningTime="2025-11-06 00:31:23.281814799 +0000 UTC m=+36.817943298" watchObservedRunningTime="2025-11-06 00:31:23.284259876 +0000 UTC m=+36.820388375"
Nov 6 00:31:23.951625 kubelet[2801]: E1106 00:31:23.951543 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2"
Nov 6 00:31:24.223534 containerd[1616]: time="2025-11-06T00:31:24.223342545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:31:24.236015 kubelet[2801]: I1106 00:31:24.234930 2801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 6 00:31:24.236776 kubelet[2801]: E1106 00:31:24.236724 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:31:24.237640 containerd[1616]: time="2025-11-06T00:31:24.237576381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Nov 6 00:31:24.242097 containerd[1616]: time="2025-11-06T00:31:24.241974952Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:31:24.248568 containerd[1616]: time="2025-11-06T00:31:24.247745220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:31:24.248568 containerd[1616]: time="2025-11-06T00:31:24.248381101Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.007000715s"
Nov 6 00:31:24.248568 containerd[1616]: time="2025-11-06T00:31:24.248434616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 6 00:31:24.252513 containerd[1616]: time="2025-11-06T00:31:24.252454798Z" level=info msg="CreateContainer within sandbox \"c26698725bc81f10646d1bbf5570895792ff15b35f2c56ca64e9713eaa073f63\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 6 00:31:24.259554 kubelet[2801]: E1106 00:31:24.259455 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.259554 kubelet[2801]: W1106 00:31:24.259489 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.259554 kubelet[2801]: E1106 00:31:24.259519 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.262182 kubelet[2801]: E1106 00:31:24.260021 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.262182 kubelet[2801]: W1106 00:31:24.260035 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.262182 kubelet[2801]: E1106 00:31:24.260047 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.262182 kubelet[2801]: E1106 00:31:24.261149 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.262182 kubelet[2801]: W1106 00:31:24.261163 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.262182 kubelet[2801]: E1106 00:31:24.261179 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.262182 kubelet[2801]: E1106 00:31:24.261471 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.262182 kubelet[2801]: W1106 00:31:24.261481 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.262182 kubelet[2801]: E1106 00:31:24.261492 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.262182 kubelet[2801]: E1106 00:31:24.261783 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.262597 kubelet[2801]: W1106 00:31:24.261819 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.262597 kubelet[2801]: E1106 00:31:24.261831 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.262597 kubelet[2801]: E1106 00:31:24.262163 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.262597 kubelet[2801]: W1106 00:31:24.262177 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.262597 kubelet[2801]: E1106 00:31:24.262189 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.264192 kubelet[2801]: E1106 00:31:24.264163 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.264192 kubelet[2801]: W1106 00:31:24.264187 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.264299 kubelet[2801]: E1106 00:31:24.264201 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.264736 kubelet[2801]: E1106 00:31:24.264604 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.264736 kubelet[2801]: W1106 00:31:24.264625 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.264736 kubelet[2801]: E1106 00:31:24.264637 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.265131 kubelet[2801]: E1106 00:31:24.264973 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.265623 kubelet[2801]: W1106 00:31:24.265560 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.265623 kubelet[2801]: E1106 00:31:24.265587 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.266164 kubelet[2801]: E1106 00:31:24.266093 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.266164 kubelet[2801]: W1106 00:31:24.266126 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.266164 kubelet[2801]: E1106 00:31:24.266161 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.270058 kubelet[2801]: E1106 00:31:24.267017 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.270058 kubelet[2801]: W1106 00:31:24.267088 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.270058 kubelet[2801]: E1106 00:31:24.267103 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.275758 kubelet[2801]: E1106 00:31:24.270531 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.275758 kubelet[2801]: W1106 00:31:24.270587 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.275758 kubelet[2801]: E1106 00:31:24.270607 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.275758 kubelet[2801]: E1106 00:31:24.271075 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.275758 kubelet[2801]: W1106 00:31:24.271089 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.275758 kubelet[2801]: E1106 00:31:24.271101 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.275758 kubelet[2801]: E1106 00:31:24.274183 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.275758 kubelet[2801]: W1106 00:31:24.274197 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.275758 kubelet[2801]: E1106 00:31:24.274210 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.276528 kubelet[2801]: E1106 00:31:24.276485 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.276528 kubelet[2801]: W1106 00:31:24.276508 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.276528 kubelet[2801]: E1106 00:31:24.276528 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.279328 kubelet[2801]: E1106 00:31:24.279291 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.279536 kubelet[2801]: W1106 00:31:24.279497 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.279536 kubelet[2801]: E1106 00:31:24.279518 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.280481 kubelet[2801]: E1106 00:31:24.280457 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.280481 kubelet[2801]: W1106 00:31:24.280477 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.280593 kubelet[2801]: E1106 00:31:24.280498 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.282275 containerd[1616]: time="2025-11-06T00:31:24.281288726Z" level=info msg="Container 275d48e288c8e22e07f1ae33183e37df88123f16a320908f687f423709c257b3: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:31:24.282567 kubelet[2801]: E1106 00:31:24.282545 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.282567 kubelet[2801]: W1106 00:31:24.282562 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.282657 kubelet[2801]: E1106 00:31:24.282583 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.282886 kubelet[2801]: E1106 00:31:24.282865 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.282886 kubelet[2801]: W1106 00:31:24.282881 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.284344 kubelet[2801]: E1106 00:31:24.284022 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.284745 kubelet[2801]: E1106 00:31:24.284614 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.284745 kubelet[2801]: W1106 00:31:24.284633 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.284903 kubelet[2801]: E1106 00:31:24.284876 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.285013 kubelet[2801]: E1106 00:31:24.284973 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.285013 kubelet[2801]: W1106 00:31:24.284983 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.285158 kubelet[2801]: E1106 00:31:24.285121 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.285253 kubelet[2801]: E1106 00:31:24.285232 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.285253 kubelet[2801]: W1106 00:31:24.285247 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.285321 kubelet[2801]: E1106 00:31:24.285271 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.285611 kubelet[2801]: E1106 00:31:24.285586 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.285657 kubelet[2801]: W1106 00:31:24.285616 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.285657 kubelet[2801]: E1106 00:31:24.285648 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.286018 kubelet[2801]: E1106 00:31:24.286001 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.286018 kubelet[2801]: W1106 00:31:24.286014 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.286105 kubelet[2801]: E1106 00:31:24.286034 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.286701 kubelet[2801]: E1106 00:31:24.286657 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.286701 kubelet[2801]: W1106 00:31:24.286683 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.286701 kubelet[2801]: E1106 00:31:24.286705 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.287053 kubelet[2801]: E1106 00:31:24.286969 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.287053 kubelet[2801]: W1106 00:31:24.286981 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.287254 kubelet[2801]: E1106 00:31:24.287177 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.287254 kubelet[2801]: E1106 00:31:24.287207 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.287254 kubelet[2801]: W1106 00:31:24.287218 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.287554 kubelet[2801]: E1106 00:31:24.287297 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.287554 kubelet[2801]: E1106 00:31:24.287446 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.287554 kubelet[2801]: W1106 00:31:24.287456 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.287554 kubelet[2801]: E1106 00:31:24.287474 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.287762 kubelet[2801]: E1106 00:31:24.287727 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.287762 kubelet[2801]: W1106 00:31:24.287744 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.287762 kubelet[2801]: E1106 00:31:24.287760 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.288160 kubelet[2801]: E1106 00:31:24.288127 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.288160 kubelet[2801]: W1106 00:31:24.288140 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.288160 kubelet[2801]: E1106 00:31:24.288153 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.288730 kubelet[2801]: E1106 00:31:24.288458 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.288730 kubelet[2801]: W1106 00:31:24.288477 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.288730 kubelet[2801]: E1106 00:31:24.288497 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.288851 kubelet[2801]: E1106 00:31:24.288810 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.288964 kubelet[2801]: W1106 00:31:24.288825 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.289121 kubelet[2801]: E1106 00:31:24.289043 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.289283 kubelet[2801]: E1106 00:31:24.289271 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:31:24.289283 kubelet[2801]: W1106 00:31:24.289283 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:31:24.289347 kubelet[2801]: E1106 00:31:24.289297 2801 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:31:24.303583 containerd[1616]: time="2025-11-06T00:31:24.303518167Z" level=info msg="CreateContainer within sandbox \"c26698725bc81f10646d1bbf5570895792ff15b35f2c56ca64e9713eaa073f63\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"275d48e288c8e22e07f1ae33183e37df88123f16a320908f687f423709c257b3\""
Nov 6 00:31:24.304340 containerd[1616]: time="2025-11-06T00:31:24.304316987Z" level=info msg="StartContainer for \"275d48e288c8e22e07f1ae33183e37df88123f16a320908f687f423709c257b3\""
Nov 6 00:31:24.312810 containerd[1616]: time="2025-11-06T00:31:24.312750557Z" level=info msg="connecting to shim 275d48e288c8e22e07f1ae33183e37df88123f16a320908f687f423709c257b3" address="unix:///run/containerd/s/56d0b88b8a6461a5a7b0f8a5720fe767eb924cc5e3ed2e70e7779970ec47387f" protocol=ttrpc version=3
Nov 6 00:31:24.385978 systemd[1]: Started cri-containerd-275d48e288c8e22e07f1ae33183e37df88123f16a320908f687f423709c257b3.scope - libcontainer container 275d48e288c8e22e07f1ae33183e37df88123f16a320908f687f423709c257b3.
Nov 6 00:31:24.506737 systemd[1]: cri-containerd-275d48e288c8e22e07f1ae33183e37df88123f16a320908f687f423709c257b3.scope: Deactivated successfully.
Nov 6 00:31:24.507388 systemd[1]: cri-containerd-275d48e288c8e22e07f1ae33183e37df88123f16a320908f687f423709c257b3.scope: Consumed 70ms CPU time, 6.3M memory peak, 2.6M written to disk.
Nov 6 00:31:24.512203 containerd[1616]: time="2025-11-06T00:31:24.510835249Z" level=info msg="TaskExit event in podsandbox handler container_id:\"275d48e288c8e22e07f1ae33183e37df88123f16a320908f687f423709c257b3\" id:\"275d48e288c8e22e07f1ae33183e37df88123f16a320908f687f423709c257b3\" pid:3520 exited_at:{seconds:1762389084 nanos:510164028}"
Nov 6 00:31:24.535457 containerd[1616]: time="2025-11-06T00:31:24.535266390Z" level=info msg="received exit event container_id:\"275d48e288c8e22e07f1ae33183e37df88123f16a320908f687f423709c257b3\" id:\"275d48e288c8e22e07f1ae33183e37df88123f16a320908f687f423709c257b3\" pid:3520 exited_at:{seconds:1762389084 nanos:510164028}"
Nov 6 00:31:24.543627 containerd[1616]: time="2025-11-06T00:31:24.543545308Z" level=info msg="StartContainer for \"275d48e288c8e22e07f1ae33183e37df88123f16a320908f687f423709c257b3\" returns successfully"
Nov 6 00:31:24.605805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-275d48e288c8e22e07f1ae33183e37df88123f16a320908f687f423709c257b3-rootfs.mount: Deactivated successfully.
Nov 6 00:31:25.247412 kubelet[2801]: E1106 00:31:25.246507 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:31:25.951399 kubelet[2801]: E1106 00:31:25.951286 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2"
Nov 6 00:31:26.258059 kubelet[2801]: E1106 00:31:26.257894 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:31:26.263256 containerd[1616]: time="2025-11-06T00:31:26.262765651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 6 00:31:27.952976 kubelet[2801]: E1106 00:31:27.952505 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2"
Nov 6 00:31:29.951828 kubelet[2801]: E1106 00:31:29.951366 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2"
Nov 6 00:31:31.951594 kubelet[2801]: E1106 00:31:31.951036 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2"
Nov 6 00:31:33.951469 kubelet[2801]: E1106 00:31:33.951359 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2"
Nov 6 00:31:34.145979 containerd[1616]: time="2025-11-06T00:31:34.145683918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:31:34.183891 containerd[1616]: time="2025-11-06T00:31:34.183794168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Nov 6 00:31:34.256607 containerd[1616]: time="2025-11-06T00:31:34.256397582Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:31:34.333401 containerd[1616]: time="2025-11-06T00:31:34.330264580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:31:34.333401 containerd[1616]: time="2025-11-06T00:31:34.331126467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 8.068313843s"
Nov 6 00:31:34.338094 containerd[1616]: time="2025-11-06T00:31:34.335399111Z" level=info
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 6 00:31:34.357006 containerd[1616]: time="2025-11-06T00:31:34.356317439Z" level=info msg="CreateContainer within sandbox \"c26698725bc81f10646d1bbf5570895792ff15b35f2c56ca64e9713eaa073f63\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 6 00:31:34.447439 containerd[1616]: time="2025-11-06T00:31:34.447324688Z" level=info msg="Container 54832740e99d0a6f69e4f95fe8ba015580861d94528de795698219c4b97a4120: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:31:34.523458 containerd[1616]: time="2025-11-06T00:31:34.523260157Z" level=info msg="CreateContainer within sandbox \"c26698725bc81f10646d1bbf5570895792ff15b35f2c56ca64e9713eaa073f63\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"54832740e99d0a6f69e4f95fe8ba015580861d94528de795698219c4b97a4120\"" Nov 6 00:31:34.529916 containerd[1616]: time="2025-11-06T00:31:34.529714961Z" level=info msg="StartContainer for \"54832740e99d0a6f69e4f95fe8ba015580861d94528de795698219c4b97a4120\"" Nov 6 00:31:34.531908 containerd[1616]: time="2025-11-06T00:31:34.531854199Z" level=info msg="connecting to shim 54832740e99d0a6f69e4f95fe8ba015580861d94528de795698219c4b97a4120" address="unix:///run/containerd/s/56d0b88b8a6461a5a7b0f8a5720fe767eb924cc5e3ed2e70e7779970ec47387f" protocol=ttrpc version=3 Nov 6 00:31:34.592492 systemd[1]: Started cri-containerd-54832740e99d0a6f69e4f95fe8ba015580861d94528de795698219c4b97a4120.scope - libcontainer container 54832740e99d0a6f69e4f95fe8ba015580861d94528de795698219c4b97a4120. 
Nov 6 00:31:34.762477 containerd[1616]: time="2025-11-06T00:31:34.762394523Z" level=info msg="StartContainer for \"54832740e99d0a6f69e4f95fe8ba015580861d94528de795698219c4b97a4120\" returns successfully" Nov 6 00:31:35.321525 kubelet[2801]: E1106 00:31:35.321049 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:31:35.952132 kubelet[2801]: E1106 00:31:35.951206 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2" Nov 6 00:31:36.327471 kubelet[2801]: E1106 00:31:36.327255 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:31:36.549579 kubelet[2801]: I1106 00:31:36.548711 2801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:31:36.549579 kubelet[2801]: E1106 00:31:36.549154 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:31:37.331837 kubelet[2801]: E1106 00:31:37.331778 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:31:37.953011 kubelet[2801]: E1106 00:31:37.952816 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2" Nov 6 00:31:38.287851 systemd[1]: cri-containerd-54832740e99d0a6f69e4f95fe8ba015580861d94528de795698219c4b97a4120.scope: Deactivated successfully. Nov 6 00:31:38.291017 containerd[1616]: time="2025-11-06T00:31:38.289294419Z" level=info msg="received exit event container_id:\"54832740e99d0a6f69e4f95fe8ba015580861d94528de795698219c4b97a4120\" id:\"54832740e99d0a6f69e4f95fe8ba015580861d94528de795698219c4b97a4120\" pid:3577 exited_at:{seconds:1762389098 nanos:288999330}" Nov 6 00:31:38.292255 systemd[1]: cri-containerd-54832740e99d0a6f69e4f95fe8ba015580861d94528de795698219c4b97a4120.scope: Consumed 983ms CPU time, 179.4M memory peak, 2.2M read from disk, 171.3M written to disk. Nov 6 00:31:38.349674 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54832740e99d0a6f69e4f95fe8ba015580861d94528de795698219c4b97a4120-rootfs.mount: Deactivated successfully. Nov 6 00:31:38.384763 containerd[1616]: time="2025-11-06T00:31:38.384613648Z" level=info msg="TaskExit event in podsandbox handler container_id:\"54832740e99d0a6f69e4f95fe8ba015580861d94528de795698219c4b97a4120\" id:\"54832740e99d0a6f69e4f95fe8ba015580861d94528de795698219c4b97a4120\" pid:3577 exited_at:{seconds:1762389098 nanos:288999330}" Nov 6 00:31:38.418868 kubelet[2801]: I1106 00:31:38.385200 2801 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 6 00:31:38.739661 kubelet[2801]: I1106 00:31:38.738680 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69hwr\" (UniqueName: \"kubernetes.io/projected/e8483869-a3c9-4d7b-858a-1505af0fb5d9-kube-api-access-69hwr\") pod \"goldmane-666569f655-v9zqm\" (UID: \"e8483869-a3c9-4d7b-858a-1505af0fb5d9\") " pod="calico-system/goldmane-666569f655-v9zqm" Nov 6 00:31:38.739661 kubelet[2801]: I1106 00:31:38.738732 2801 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e8483869-a3c9-4d7b-858a-1505af0fb5d9-goldmane-key-pair\") pod \"goldmane-666569f655-v9zqm\" (UID: \"e8483869-a3c9-4d7b-858a-1505af0fb5d9\") " pod="calico-system/goldmane-666569f655-v9zqm" Nov 6 00:31:38.739661 kubelet[2801]: I1106 00:31:38.738758 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0228b1a2-410c-40ab-86ee-d344f8e34170-calico-apiserver-certs\") pod \"calico-apiserver-6977b59f7b-5thqk\" (UID: \"0228b1a2-410c-40ab-86ee-d344f8e34170\") " pod="calico-apiserver/calico-apiserver-6977b59f7b-5thqk" Nov 6 00:31:38.739661 kubelet[2801]: I1106 00:31:38.738778 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdhnz\" (UniqueName: \"kubernetes.io/projected/0228b1a2-410c-40ab-86ee-d344f8e34170-kube-api-access-rdhnz\") pod \"calico-apiserver-6977b59f7b-5thqk\" (UID: \"0228b1a2-410c-40ab-86ee-d344f8e34170\") " pod="calico-apiserver/calico-apiserver-6977b59f7b-5thqk" Nov 6 00:31:38.739661 kubelet[2801]: I1106 00:31:38.739040 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8483869-a3c9-4d7b-858a-1505af0fb5d9-config\") pod \"goldmane-666569f655-v9zqm\" (UID: \"e8483869-a3c9-4d7b-858a-1505af0fb5d9\") " pod="calico-system/goldmane-666569f655-v9zqm" Nov 6 00:31:38.739970 kubelet[2801]: I1106 00:31:38.739109 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8483869-a3c9-4d7b-858a-1505af0fb5d9-goldmane-ca-bundle\") pod \"goldmane-666569f655-v9zqm\" (UID: \"e8483869-a3c9-4d7b-858a-1505af0fb5d9\") " pod="calico-system/goldmane-666569f655-v9zqm" Nov 6 
00:31:38.797880 systemd[1]: Created slice kubepods-besteffort-pode8483869_a3c9_4d7b_858a_1505af0fb5d9.slice - libcontainer container kubepods-besteffort-pode8483869_a3c9_4d7b_858a_1505af0fb5d9.slice. Nov 6 00:31:38.817744 systemd[1]: Created slice kubepods-besteffort-pod0228b1a2_410c_40ab_86ee_d344f8e34170.slice - libcontainer container kubepods-besteffort-pod0228b1a2_410c_40ab_86ee_d344f8e34170.slice. Nov 6 00:31:38.836390 systemd[1]: Created slice kubepods-besteffort-pod3eac792e_30de_470b_8978_364680127235.slice - libcontainer container kubepods-besteffort-pod3eac792e_30de_470b_8978_364680127235.slice. Nov 6 00:31:38.840761 kubelet[2801]: I1106 00:31:38.839425 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52f5f93c-5f24-4f92-88a3-401da8e7e300-tigera-ca-bundle\") pod \"calico-kube-controllers-589569c468-rpmp8\" (UID: \"52f5f93c-5f24-4f92-88a3-401da8e7e300\") " pod="calico-system/calico-kube-controllers-589569c468-rpmp8" Nov 6 00:31:38.840761 kubelet[2801]: I1106 00:31:38.839466 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4977dfeb-b401-43e8-996c-8b0f6fd603a7-calico-apiserver-certs\") pod \"calico-apiserver-7bb775b858-snbnl\" (UID: \"4977dfeb-b401-43e8-996c-8b0f6fd603a7\") " pod="calico-apiserver/calico-apiserver-7bb775b858-snbnl" Nov 6 00:31:38.843729 kubelet[2801]: I1106 00:31:38.839492 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82f74817-8797-47bc-b585-b333aafdc3bf-config-volume\") pod \"coredns-668d6bf9bc-2tndd\" (UID: \"82f74817-8797-47bc-b585-b333aafdc3bf\") " pod="kube-system/coredns-668d6bf9bc-2tndd" Nov 6 00:31:38.843729 kubelet[2801]: I1106 00:31:38.841852 2801 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3eac792e-30de-470b-8978-364680127235-whisker-ca-bundle\") pod \"whisker-d4d4f7875-ldjcd\" (UID: \"3eac792e-30de-470b-8978-364680127235\") " pod="calico-system/whisker-d4d4f7875-ldjcd" Nov 6 00:31:38.843729 kubelet[2801]: I1106 00:31:38.841899 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xgbg\" (UniqueName: \"kubernetes.io/projected/4977dfeb-b401-43e8-996c-8b0f6fd603a7-kube-api-access-7xgbg\") pod \"calico-apiserver-7bb775b858-snbnl\" (UID: \"4977dfeb-b401-43e8-996c-8b0f6fd603a7\") " pod="calico-apiserver/calico-apiserver-7bb775b858-snbnl" Nov 6 00:31:38.843729 kubelet[2801]: I1106 00:31:38.841950 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qksrx\" (UniqueName: \"kubernetes.io/projected/23a4e2d6-7e35-4d28-a47f-d87913358f1f-kube-api-access-qksrx\") pod \"calico-apiserver-7bb775b858-p9q9w\" (UID: \"23a4e2d6-7e35-4d28-a47f-d87913358f1f\") " pod="calico-apiserver/calico-apiserver-7bb775b858-p9q9w" Nov 6 00:31:38.843729 kubelet[2801]: I1106 00:31:38.842002 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjznn\" (UniqueName: \"kubernetes.io/projected/3eac792e-30de-470b-8978-364680127235-kube-api-access-rjznn\") pod \"whisker-d4d4f7875-ldjcd\" (UID: \"3eac792e-30de-470b-8978-364680127235\") " pod="calico-system/whisker-d4d4f7875-ldjcd" Nov 6 00:31:38.843990 kubelet[2801]: I1106 00:31:38.842022 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1313b9d-cae7-480b-9dd6-87cba17dd41d-config-volume\") pod \"coredns-668d6bf9bc-wggz5\" (UID: \"a1313b9d-cae7-480b-9dd6-87cba17dd41d\") " pod="kube-system/coredns-668d6bf9bc-wggz5" Nov 6 
00:31:38.843990 kubelet[2801]: I1106 00:31:38.842080 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvq27\" (UniqueName: \"kubernetes.io/projected/82f74817-8797-47bc-b585-b333aafdc3bf-kube-api-access-rvq27\") pod \"coredns-668d6bf9bc-2tndd\" (UID: \"82f74817-8797-47bc-b585-b333aafdc3bf\") " pod="kube-system/coredns-668d6bf9bc-2tndd" Nov 6 00:31:38.843990 kubelet[2801]: I1106 00:31:38.842121 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/23a4e2d6-7e35-4d28-a47f-d87913358f1f-calico-apiserver-certs\") pod \"calico-apiserver-7bb775b858-p9q9w\" (UID: \"23a4e2d6-7e35-4d28-a47f-d87913358f1f\") " pod="calico-apiserver/calico-apiserver-7bb775b858-p9q9w" Nov 6 00:31:38.843990 kubelet[2801]: I1106 00:31:38.842209 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnbfb\" (UniqueName: \"kubernetes.io/projected/a1313b9d-cae7-480b-9dd6-87cba17dd41d-kube-api-access-mnbfb\") pod \"coredns-668d6bf9bc-wggz5\" (UID: \"a1313b9d-cae7-480b-9dd6-87cba17dd41d\") " pod="kube-system/coredns-668d6bf9bc-wggz5" Nov 6 00:31:38.843990 kubelet[2801]: I1106 00:31:38.842247 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtjpk\" (UniqueName: \"kubernetes.io/projected/52f5f93c-5f24-4f92-88a3-401da8e7e300-kube-api-access-gtjpk\") pod \"calico-kube-controllers-589569c468-rpmp8\" (UID: \"52f5f93c-5f24-4f92-88a3-401da8e7e300\") " pod="calico-system/calico-kube-controllers-589569c468-rpmp8" Nov 6 00:31:38.844170 kubelet[2801]: I1106 00:31:38.842271 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3eac792e-30de-470b-8978-364680127235-whisker-backend-key-pair\") pod 
\"whisker-d4d4f7875-ldjcd\" (UID: \"3eac792e-30de-470b-8978-364680127235\") " pod="calico-system/whisker-d4d4f7875-ldjcd" Nov 6 00:31:39.004061 systemd[1]: Created slice kubepods-burstable-pod82f74817_8797_47bc_b585_b333aafdc3bf.slice - libcontainer container kubepods-burstable-pod82f74817_8797_47bc_b585_b333aafdc3bf.slice. Nov 6 00:31:39.023025 systemd[1]: Created slice kubepods-burstable-poda1313b9d_cae7_480b_9dd6_87cba17dd41d.slice - libcontainer container kubepods-burstable-poda1313b9d_cae7_480b_9dd6_87cba17dd41d.slice. Nov 6 00:31:39.033090 kubelet[2801]: E1106 00:31:39.032846 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:31:39.034890 systemd[1]: Created slice kubepods-besteffort-pod52f5f93c_5f24_4f92_88a3_401da8e7e300.slice - libcontainer container kubepods-besteffort-pod52f5f93c_5f24_4f92_88a3_401da8e7e300.slice. Nov 6 00:31:39.039910 containerd[1616]: time="2025-11-06T00:31:39.039434322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wggz5,Uid:a1313b9d-cae7-480b-9dd6-87cba17dd41d,Namespace:kube-system,Attempt:0,}" Nov 6 00:31:39.050906 containerd[1616]: time="2025-11-06T00:31:39.050856811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589569c468-rpmp8,Uid:52f5f93c-5f24-4f92-88a3-401da8e7e300,Namespace:calico-system,Attempt:0,}" Nov 6 00:31:39.059858 systemd[1]: Created slice kubepods-besteffort-pod4977dfeb_b401_43e8_996c_8b0f6fd603a7.slice - libcontainer container kubepods-besteffort-pod4977dfeb_b401_43e8_996c_8b0f6fd603a7.slice. 
Nov 6 00:31:39.077315 containerd[1616]: time="2025-11-06T00:31:39.077239313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb775b858-snbnl,Uid:4977dfeb-b401-43e8-996c-8b0f6fd603a7,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:31:39.084028 systemd[1]: Created slice kubepods-besteffort-pod23a4e2d6_7e35_4d28_a47f_d87913358f1f.slice - libcontainer container kubepods-besteffort-pod23a4e2d6_7e35_4d28_a47f_d87913358f1f.slice. Nov 6 00:31:39.096302 containerd[1616]: time="2025-11-06T00:31:39.096232418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb775b858-p9q9w,Uid:23a4e2d6-7e35-4d28-a47f-d87913358f1f,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:31:39.118141 containerd[1616]: time="2025-11-06T00:31:39.117993016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-v9zqm,Uid:e8483869-a3c9-4d7b-858a-1505af0fb5d9,Namespace:calico-system,Attempt:0,}" Nov 6 00:31:39.126175 containerd[1616]: time="2025-11-06T00:31:39.125070203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6977b59f7b-5thqk,Uid:0228b1a2-410c-40ab-86ee-d344f8e34170,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:31:39.156532 containerd[1616]: time="2025-11-06T00:31:39.155331108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d4d4f7875-ldjcd,Uid:3eac792e-30de-470b-8978-364680127235,Namespace:calico-system,Attempt:0,}" Nov 6 00:31:39.317763 kubelet[2801]: E1106 00:31:39.317617 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:31:39.320656 containerd[1616]: time="2025-11-06T00:31:39.319545199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2tndd,Uid:82f74817-8797-47bc-b585-b333aafdc3bf,Namespace:kube-system,Attempt:0,}" Nov 6 00:31:39.509592 containerd[1616]: time="2025-11-06T00:31:39.509348756Z" 
level=error msg="Failed to destroy network for sandbox \"f9ff476e427b0de3e1c9c657481e84c2369633ae5341ee0b21f7c51c8a658a24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.515422 systemd[1]: run-netns-cni\x2de83e8d64\x2d5bd1\x2d53c3\x2d2474\x2d334c8c4fb937.mount: Deactivated successfully. Nov 6 00:31:39.520207 containerd[1616]: time="2025-11-06T00:31:39.519999076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-v9zqm,Uid:e8483869-a3c9-4d7b-858a-1505af0fb5d9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9ff476e427b0de3e1c9c657481e84c2369633ae5341ee0b21f7c51c8a658a24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.537766 containerd[1616]: time="2025-11-06T00:31:39.537689121Z" level=error msg="Failed to destroy network for sandbox \"469c5426f634fd1f24d762209862a70aedc0135458e848d27f56a94653c48b75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.541766 systemd[1]: run-netns-cni\x2d4d063c5f\x2dfdec\x2da973\x2d191a\x2d261b38fbddb5.mount: Deactivated successfully. 
Nov 6 00:31:39.543597 containerd[1616]: time="2025-11-06T00:31:39.543535425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wggz5,Uid:a1313b9d-cae7-480b-9dd6-87cba17dd41d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"469c5426f634fd1f24d762209862a70aedc0135458e848d27f56a94653c48b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.547051 kubelet[2801]: E1106 00:31:39.546985 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469c5426f634fd1f24d762209862a70aedc0135458e848d27f56a94653c48b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.547571 kubelet[2801]: E1106 00:31:39.547338 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469c5426f634fd1f24d762209862a70aedc0135458e848d27f56a94653c48b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wggz5" Nov 6 00:31:39.547687 kubelet[2801]: E1106 00:31:39.546752 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9ff476e427b0de3e1c9c657481e84c2369633ae5341ee0b21f7c51c8a658a24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.547745 kubelet[2801]: E1106 
00:31:39.547600 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469c5426f634fd1f24d762209862a70aedc0135458e848d27f56a94653c48b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wggz5" Nov 6 00:31:39.548234 kubelet[2801]: E1106 00:31:39.547682 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9ff476e427b0de3e1c9c657481e84c2369633ae5341ee0b21f7c51c8a658a24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-v9zqm" Nov 6 00:31:39.548314 kubelet[2801]: E1106 00:31:39.548238 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9ff476e427b0de3e1c9c657481e84c2369633ae5341ee0b21f7c51c8a658a24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-v9zqm" Nov 6 00:31:39.548580 kubelet[2801]: E1106 00:31:39.548326 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wggz5_kube-system(a1313b9d-cae7-480b-9dd6-87cba17dd41d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wggz5_kube-system(a1313b9d-cae7-480b-9dd6-87cba17dd41d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"469c5426f634fd1f24d762209862a70aedc0135458e848d27f56a94653c48b75\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wggz5" podUID="a1313b9d-cae7-480b-9dd6-87cba17dd41d" Nov 6 00:31:39.548699 kubelet[2801]: E1106 00:31:39.548652 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-v9zqm_calico-system(e8483869-a3c9-4d7b-858a-1505af0fb5d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-v9zqm_calico-system(e8483869-a3c9-4d7b-858a-1505af0fb5d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9ff476e427b0de3e1c9c657481e84c2369633ae5341ee0b21f7c51c8a658a24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-v9zqm" podUID="e8483869-a3c9-4d7b-858a-1505af0fb5d9" Nov 6 00:31:39.552003 kubelet[2801]: E1106 00:31:39.551964 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:31:39.558311 containerd[1616]: time="2025-11-06T00:31:39.558017328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 6 00:31:39.560136 containerd[1616]: time="2025-11-06T00:31:39.558793935Z" level=error msg="Failed to destroy network for sandbox \"bd47de27a6d43b738a8263cf684e91eafc66cc5a4b1ca55d33fdaf429871fe6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.567154 containerd[1616]: time="2025-11-06T00:31:39.566889132Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6977b59f7b-5thqk,Uid:0228b1a2-410c-40ab-86ee-d344f8e34170,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd47de27a6d43b738a8263cf684e91eafc66cc5a4b1ca55d33fdaf429871fe6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.567069 systemd[1]: run-netns-cni\x2d26a0a3bc\x2d4e42\x2d4fdd\x2dde8f\x2d74f39e123a12.mount: Deactivated successfully. Nov 6 00:31:39.569259 kubelet[2801]: E1106 00:31:39.569137 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd47de27a6d43b738a8263cf684e91eafc66cc5a4b1ca55d33fdaf429871fe6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.569259 kubelet[2801]: E1106 00:31:39.569210 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd47de27a6d43b738a8263cf684e91eafc66cc5a4b1ca55d33fdaf429871fe6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6977b59f7b-5thqk" Nov 6 00:31:39.569259 kubelet[2801]: E1106 00:31:39.569241 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd47de27a6d43b738a8263cf684e91eafc66cc5a4b1ca55d33fdaf429871fe6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-6977b59f7b-5thqk" Nov 6 00:31:39.569520 kubelet[2801]: E1106 00:31:39.569293 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6977b59f7b-5thqk_calico-apiserver(0228b1a2-410c-40ab-86ee-d344f8e34170)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6977b59f7b-5thqk_calico-apiserver(0228b1a2-410c-40ab-86ee-d344f8e34170)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd47de27a6d43b738a8263cf684e91eafc66cc5a4b1ca55d33fdaf429871fe6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6977b59f7b-5thqk" podUID="0228b1a2-410c-40ab-86ee-d344f8e34170" Nov 6 00:31:39.579470 containerd[1616]: time="2025-11-06T00:31:39.579267453Z" level=error msg="Failed to destroy network for sandbox \"e88e6e4abc97688c9bc8bee19c7b6fc3276c50fc95d959f48efd87eb491034aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.584600 containerd[1616]: time="2025-11-06T00:31:39.584481989Z" level=error msg="Failed to destroy network for sandbox \"b851c3fc5e9ef6238b299b33de2301b8a660a58ea5da67d10642f01e98409030\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.585401 containerd[1616]: time="2025-11-06T00:31:39.585319132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d4d4f7875-ldjcd,Uid:3eac792e-30de-470b-8978-364680127235,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"e88e6e4abc97688c9bc8bee19c7b6fc3276c50fc95d959f48efd87eb491034aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.588172 containerd[1616]: time="2025-11-06T00:31:39.587789763Z" level=error msg="Failed to destroy network for sandbox \"168956e14afad369a25ea94ffd45110686026d22693efc85a20335fc5f4fe585\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.592685 kubelet[2801]: E1106 00:31:39.590084 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e88e6e4abc97688c9bc8bee19c7b6fc3276c50fc95d959f48efd87eb491034aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.592685 kubelet[2801]: E1106 00:31:39.590155 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e88e6e4abc97688c9bc8bee19c7b6fc3276c50fc95d959f48efd87eb491034aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d4d4f7875-ldjcd" Nov 6 00:31:39.592685 kubelet[2801]: E1106 00:31:39.590184 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e88e6e4abc97688c9bc8bee19c7b6fc3276c50fc95d959f48efd87eb491034aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/whisker-d4d4f7875-ldjcd" Nov 6 00:31:39.593471 kubelet[2801]: E1106 00:31:39.590232 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-d4d4f7875-ldjcd_calico-system(3eac792e-30de-470b-8978-364680127235)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-d4d4f7875-ldjcd_calico-system(3eac792e-30de-470b-8978-364680127235)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e88e6e4abc97688c9bc8bee19c7b6fc3276c50fc95d959f48efd87eb491034aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d4d4f7875-ldjcd" podUID="3eac792e-30de-470b-8978-364680127235" Nov 6 00:31:39.597251 containerd[1616]: time="2025-11-06T00:31:39.596826394Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589569c468-rpmp8,Uid:52f5f93c-5f24-4f92-88a3-401da8e7e300,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b851c3fc5e9ef6238b299b33de2301b8a660a58ea5da67d10642f01e98409030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.598370 containerd[1616]: time="2025-11-06T00:31:39.598288541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb775b858-snbnl,Uid:4977dfeb-b401-43e8-996c-8b0f6fd603a7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"168956e14afad369a25ea94ffd45110686026d22693efc85a20335fc5f4fe585\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.598886 kubelet[2801]: E1106 00:31:39.598836 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"168956e14afad369a25ea94ffd45110686026d22693efc85a20335fc5f4fe585\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.598886 kubelet[2801]: E1106 00:31:39.598916 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"168956e14afad369a25ea94ffd45110686026d22693efc85a20335fc5f4fe585\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb775b858-snbnl" Nov 6 00:31:39.598886 kubelet[2801]: E1106 00:31:39.599413 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b851c3fc5e9ef6238b299b33de2301b8a660a58ea5da67d10642f01e98409030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.598886 kubelet[2801]: E1106 00:31:39.599491 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"168956e14afad369a25ea94ffd45110686026d22693efc85a20335fc5f4fe585\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb775b858-snbnl" Nov 6 00:31:39.601032 kubelet[2801]: 
E1106 00:31:39.599575 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b851c3fc5e9ef6238b299b33de2301b8a660a58ea5da67d10642f01e98409030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589569c468-rpmp8" Nov 6 00:31:39.601032 kubelet[2801]: E1106 00:31:39.599766 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b851c3fc5e9ef6238b299b33de2301b8a660a58ea5da67d10642f01e98409030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589569c468-rpmp8" Nov 6 00:31:39.601032 kubelet[2801]: E1106 00:31:39.599691 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bb775b858-snbnl_calico-apiserver(4977dfeb-b401-43e8-996c-8b0f6fd603a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bb775b858-snbnl_calico-apiserver(4977dfeb-b401-43e8-996c-8b0f6fd603a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"168956e14afad369a25ea94ffd45110686026d22693efc85a20335fc5f4fe585\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bb775b858-snbnl" podUID="4977dfeb-b401-43e8-996c-8b0f6fd603a7" Nov 6 00:31:39.601189 kubelet[2801]: E1106 00:31:39.599980 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-589569c468-rpmp8_calico-system(52f5f93c-5f24-4f92-88a3-401da8e7e300)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-589569c468-rpmp8_calico-system(52f5f93c-5f24-4f92-88a3-401da8e7e300)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b851c3fc5e9ef6238b299b33de2301b8a660a58ea5da67d10642f01e98409030\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-589569c468-rpmp8" podUID="52f5f93c-5f24-4f92-88a3-401da8e7e300" Nov 6 00:31:39.628317 containerd[1616]: time="2025-11-06T00:31:39.623853499Z" level=error msg="Failed to destroy network for sandbox \"c9371556d33ef9b9f944d0c766dc245cd3ce4915af2093c80f8e429b45870d59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.636405 containerd[1616]: time="2025-11-06T00:31:39.636307004Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb775b858-p9q9w,Uid:23a4e2d6-7e35-4d28-a47f-d87913358f1f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9371556d33ef9b9f944d0c766dc245cd3ce4915af2093c80f8e429b45870d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.637109 kubelet[2801]: E1106 00:31:39.636698 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9371556d33ef9b9f944d0c766dc245cd3ce4915af2093c80f8e429b45870d59\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.637109 kubelet[2801]: E1106 00:31:39.636779 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9371556d33ef9b9f944d0c766dc245cd3ce4915af2093c80f8e429b45870d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb775b858-p9q9w" Nov 6 00:31:39.637109 kubelet[2801]: E1106 00:31:39.636822 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9371556d33ef9b9f944d0c766dc245cd3ce4915af2093c80f8e429b45870d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb775b858-p9q9w" Nov 6 00:31:39.637251 kubelet[2801]: E1106 00:31:39.636885 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bb775b858-p9q9w_calico-apiserver(23a4e2d6-7e35-4d28-a47f-d87913358f1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bb775b858-p9q9w_calico-apiserver(23a4e2d6-7e35-4d28-a47f-d87913358f1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9371556d33ef9b9f944d0c766dc245cd3ce4915af2093c80f8e429b45870d59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bb775b858-p9q9w" podUID="23a4e2d6-7e35-4d28-a47f-d87913358f1f" Nov 6 
00:31:39.639801 containerd[1616]: time="2025-11-06T00:31:39.637836020Z" level=error msg="Failed to destroy network for sandbox \"217383cd7bf6a5678c7d8b78efde1d05f46aed920c57c1d7ccda304c45e37045\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.654246 containerd[1616]: time="2025-11-06T00:31:39.654149773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2tndd,Uid:82f74817-8797-47bc-b585-b333aafdc3bf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"217383cd7bf6a5678c7d8b78efde1d05f46aed920c57c1d7ccda304c45e37045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.654540 kubelet[2801]: E1106 00:31:39.654478 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"217383cd7bf6a5678c7d8b78efde1d05f46aed920c57c1d7ccda304c45e37045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:39.654668 kubelet[2801]: E1106 00:31:39.654571 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"217383cd7bf6a5678c7d8b78efde1d05f46aed920c57c1d7ccda304c45e37045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2tndd" Nov 6 00:31:39.654668 kubelet[2801]: E1106 00:31:39.654603 2801 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"217383cd7bf6a5678c7d8b78efde1d05f46aed920c57c1d7ccda304c45e37045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2tndd" Nov 6 00:31:39.654761 kubelet[2801]: E1106 00:31:39.654665 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2tndd_kube-system(82f74817-8797-47bc-b585-b333aafdc3bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2tndd_kube-system(82f74817-8797-47bc-b585-b333aafdc3bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"217383cd7bf6a5678c7d8b78efde1d05f46aed920c57c1d7ccda304c45e37045\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2tndd" podUID="82f74817-8797-47bc-b585-b333aafdc3bf" Nov 6 00:31:39.964625 systemd[1]: Created slice kubepods-besteffort-pod3e3d7027_cc01_4677_b498_d2aaae1cd6f2.slice - libcontainer container kubepods-besteffort-pod3e3d7027_cc01_4677_b498_d2aaae1cd6f2.slice. 
Nov 6 00:31:39.970556 containerd[1616]: time="2025-11-06T00:31:39.970502680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwlzm,Uid:3e3d7027-cc01-4677-b498-d2aaae1cd6f2,Namespace:calico-system,Attempt:0,}" Nov 6 00:31:40.134168 containerd[1616]: time="2025-11-06T00:31:40.134051984Z" level=error msg="Failed to destroy network for sandbox \"7c358586e62550cdc0b8d766c314cc7992872ab18a99cb0514739d5c577aa5c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:40.158534 containerd[1616]: time="2025-11-06T00:31:40.158221981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwlzm,Uid:3e3d7027-cc01-4677-b498-d2aaae1cd6f2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c358586e62550cdc0b8d766c314cc7992872ab18a99cb0514739d5c577aa5c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:40.158799 kubelet[2801]: E1106 00:31:40.158531 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c358586e62550cdc0b8d766c314cc7992872ab18a99cb0514739d5c577aa5c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:40.158799 kubelet[2801]: E1106 00:31:40.158611 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c358586e62550cdc0b8d766c314cc7992872ab18a99cb0514739d5c577aa5c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwlzm" Nov 6 00:31:40.158799 kubelet[2801]: E1106 00:31:40.158649 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c358586e62550cdc0b8d766c314cc7992872ab18a99cb0514739d5c577aa5c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwlzm" Nov 6 00:31:40.158971 kubelet[2801]: E1106 00:31:40.158703 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xwlzm_calico-system(3e3d7027-cc01-4677-b498-d2aaae1cd6f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xwlzm_calico-system(3e3d7027-cc01-4677-b498-d2aaae1cd6f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c358586e62550cdc0b8d766c314cc7992872ab18a99cb0514739d5c577aa5c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2" Nov 6 00:31:40.352229 systemd[1]: run-netns-cni\x2d5b8e0618\x2dc7c6\x2d713c\x2dbb6a\x2d8a75b4a4cdea.mount: Deactivated successfully. Nov 6 00:31:40.352737 systemd[1]: run-netns-cni\x2d5b229d14\x2dee74\x2d6cbf\x2dcf6c\x2da3ab7b64a046.mount: Deactivated successfully. Nov 6 00:31:40.352830 systemd[1]: run-netns-cni\x2d2fbbe05c\x2def5a\x2d74c0\x2d6100\x2dae02e2362a33.mount: Deactivated successfully. Nov 6 00:31:40.352908 systemd[1]: run-netns-cni\x2d97f6ea67\x2dadda\x2d5802\x2d2877\x2d2209897a4f20.mount: Deactivated successfully. 
Nov 6 00:31:40.353012 systemd[1]: run-netns-cni\x2ded59e151\x2d45c4\x2d4d4c\x2d33a7\x2d268c793985b9.mount: Deactivated successfully. Nov 6 00:31:49.973317 containerd[1616]: time="2025-11-06T00:31:49.973258694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb775b858-snbnl,Uid:4977dfeb-b401-43e8-996c-8b0f6fd603a7,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:31:50.144773 containerd[1616]: time="2025-11-06T00:31:50.144573553Z" level=error msg="Failed to destroy network for sandbox \"75a8a5c8eeffa34e6fba0c15eb31860ef0d998ee6157ae1348cd0e808e57f7ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:50.150731 systemd[1]: run-netns-cni\x2d1b5b6a57\x2d9921\x2d556f\x2dab1c\x2d34c7a63c5116.mount: Deactivated successfully. Nov 6 00:31:50.160257 containerd[1616]: time="2025-11-06T00:31:50.160176890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb775b858-snbnl,Uid:4977dfeb-b401-43e8-996c-8b0f6fd603a7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"75a8a5c8eeffa34e6fba0c15eb31860ef0d998ee6157ae1348cd0e808e57f7ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:50.161733 kubelet[2801]: E1106 00:31:50.160804 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75a8a5c8eeffa34e6fba0c15eb31860ef0d998ee6157ae1348cd0e808e57f7ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:50.161733 kubelet[2801]: E1106 00:31:50.160892 
2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75a8a5c8eeffa34e6fba0c15eb31860ef0d998ee6157ae1348cd0e808e57f7ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb775b858-snbnl" Nov 6 00:31:50.161733 kubelet[2801]: E1106 00:31:50.160920 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75a8a5c8eeffa34e6fba0c15eb31860ef0d998ee6157ae1348cd0e808e57f7ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb775b858-snbnl" Nov 6 00:31:50.163273 kubelet[2801]: E1106 00:31:50.161016 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bb775b858-snbnl_calico-apiserver(4977dfeb-b401-43e8-996c-8b0f6fd603a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bb775b858-snbnl_calico-apiserver(4977dfeb-b401-43e8-996c-8b0f6fd603a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75a8a5c8eeffa34e6fba0c15eb31860ef0d998ee6157ae1348cd0e808e57f7ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bb775b858-snbnl" podUID="4977dfeb-b401-43e8-996c-8b0f6fd603a7" Nov 6 00:31:51.963754 kubelet[2801]: E1106 00:31:51.958323 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:31:51.963754 kubelet[2801]: E1106 00:31:51.960297 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:31:51.964323 containerd[1616]: time="2025-11-06T00:31:51.962478139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-v9zqm,Uid:e8483869-a3c9-4d7b-858a-1505af0fb5d9,Namespace:calico-system,Attempt:0,}" Nov 6 00:31:51.964323 containerd[1616]: time="2025-11-06T00:31:51.962573462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2tndd,Uid:82f74817-8797-47bc-b585-b333aafdc3bf,Namespace:kube-system,Attempt:0,}" Nov 6 00:31:51.964323 containerd[1616]: time="2025-11-06T00:31:51.962600874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wggz5,Uid:a1313b9d-cae7-480b-9dd6-87cba17dd41d,Namespace:kube-system,Attempt:0,}" Nov 6 00:31:51.964323 containerd[1616]: time="2025-11-06T00:31:51.962516673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwlzm,Uid:3e3d7027-cc01-4677-b498-d2aaae1cd6f2,Namespace:calico-system,Attempt:0,}" Nov 6 00:31:51.964323 containerd[1616]: time="2025-11-06T00:31:51.962517174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb775b858-p9q9w,Uid:23a4e2d6-7e35-4d28-a47f-d87913358f1f,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:31:52.144267 containerd[1616]: time="2025-11-06T00:31:52.137121574Z" level=error msg="Failed to destroy network for sandbox \"fd1bd43393b9ddefb141384db8176bb6fa49aa01dca6e7c8b6ae71ae60849154\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:52.144829 containerd[1616]: time="2025-11-06T00:31:52.143635580Z" level=error msg="Failed to destroy network for sandbox 
\"a15c6577dd61840659a9e423cfe933393cc6817bb278895f272bc93bb11e75f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:52.248102 containerd[1616]: time="2025-11-06T00:31:52.242965448Z" level=error msg="Failed to destroy network for sandbox \"774fdc9fcc9dd34541ce2b6dba3b8632ae9f081566603bb62c550060d54f2fa3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:52.267476 containerd[1616]: time="2025-11-06T00:31:52.267399313Z" level=error msg="Failed to destroy network for sandbox \"d865b483c4713def99bfcf90d06cb17703084ed9577bf5cf4fa893c876b152ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:52.331729 containerd[1616]: time="2025-11-06T00:31:52.331639308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb775b858-p9q9w,Uid:23a4e2d6-7e35-4d28-a47f-d87913358f1f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd1bd43393b9ddefb141384db8176bb6fa49aa01dca6e7c8b6ae71ae60849154\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:52.332455 kubelet[2801]: E1106 00:31:52.332408 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd1bd43393b9ddefb141384db8176bb6fa49aa01dca6e7c8b6ae71ae60849154\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:52.332803 kubelet[2801]: E1106 00:31:52.332745 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd1bd43393b9ddefb141384db8176bb6fa49aa01dca6e7c8b6ae71ae60849154\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb775b858-p9q9w" Nov 6 00:31:52.332968 kubelet[2801]: E1106 00:31:52.332915 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd1bd43393b9ddefb141384db8176bb6fa49aa01dca6e7c8b6ae71ae60849154\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb775b858-p9q9w" Nov 6 00:31:52.336090 kubelet[2801]: E1106 00:31:52.336009 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bb775b858-p9q9w_calico-apiserver(23a4e2d6-7e35-4d28-a47f-d87913358f1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bb775b858-p9q9w_calico-apiserver(23a4e2d6-7e35-4d28-a47f-d87913358f1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd1bd43393b9ddefb141384db8176bb6fa49aa01dca6e7c8b6ae71ae60849154\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bb775b858-p9q9w" podUID="23a4e2d6-7e35-4d28-a47f-d87913358f1f" Nov 6 00:31:52.342403 containerd[1616]: time="2025-11-06T00:31:52.340925998Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wggz5,Uid:a1313b9d-cae7-480b-9dd6-87cba17dd41d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a15c6577dd61840659a9e423cfe933393cc6817bb278895f272bc93bb11e75f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:52.342602 kubelet[2801]: E1106 00:31:52.341561 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a15c6577dd61840659a9e423cfe933393cc6817bb278895f272bc93bb11e75f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:52.342602 kubelet[2801]: E1106 00:31:52.341633 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a15c6577dd61840659a9e423cfe933393cc6817bb278895f272bc93bb11e75f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wggz5" Nov 6 00:31:52.342602 kubelet[2801]: E1106 00:31:52.341662 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a15c6577dd61840659a9e423cfe933393cc6817bb278895f272bc93bb11e75f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wggz5" Nov 6 00:31:52.342778 kubelet[2801]: E1106 00:31:52.341734 2801 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wggz5_kube-system(a1313b9d-cae7-480b-9dd6-87cba17dd41d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wggz5_kube-system(a1313b9d-cae7-480b-9dd6-87cba17dd41d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a15c6577dd61840659a9e423cfe933393cc6817bb278895f272bc93bb11e75f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wggz5" podUID="a1313b9d-cae7-480b-9dd6-87cba17dd41d" Nov 6 00:31:52.352829 containerd[1616]: time="2025-11-06T00:31:52.350464198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-v9zqm,Uid:e8483869-a3c9-4d7b-858a-1505af0fb5d9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"774fdc9fcc9dd34541ce2b6dba3b8632ae9f081566603bb62c550060d54f2fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:52.354225 kubelet[2801]: E1106 00:31:52.353905 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"774fdc9fcc9dd34541ce2b6dba3b8632ae9f081566603bb62c550060d54f2fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:52.354225 kubelet[2801]: E1106 00:31:52.354012 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"774fdc9fcc9dd34541ce2b6dba3b8632ae9f081566603bb62c550060d54f2fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-v9zqm" Nov 6 00:31:52.354225 kubelet[2801]: E1106 00:31:52.354041 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"774fdc9fcc9dd34541ce2b6dba3b8632ae9f081566603bb62c550060d54f2fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-v9zqm" Nov 6 00:31:52.354619 kubelet[2801]: E1106 00:31:52.354095 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-v9zqm_calico-system(e8483869-a3c9-4d7b-858a-1505af0fb5d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-v9zqm_calico-system(e8483869-a3c9-4d7b-858a-1505af0fb5d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"774fdc9fcc9dd34541ce2b6dba3b8632ae9f081566603bb62c550060d54f2fa3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-v9zqm" podUID="e8483869-a3c9-4d7b-858a-1505af0fb5d9" Nov 6 00:31:52.361382 containerd[1616]: time="2025-11-06T00:31:52.360666437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwlzm,Uid:3e3d7027-cc01-4677-b498-d2aaae1cd6f2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d865b483c4713def99bfcf90d06cb17703084ed9577bf5cf4fa893c876b152ab\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:52.361623 kubelet[2801]: E1106 00:31:52.361030 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d865b483c4713def99bfcf90d06cb17703084ed9577bf5cf4fa893c876b152ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:52.361623 kubelet[2801]: E1106 00:31:52.361117 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d865b483c4713def99bfcf90d06cb17703084ed9577bf5cf4fa893c876b152ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwlzm" Nov 6 00:31:52.361623 kubelet[2801]: E1106 00:31:52.361153 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d865b483c4713def99bfcf90d06cb17703084ed9577bf5cf4fa893c876b152ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwlzm" Nov 6 00:31:52.361803 kubelet[2801]: E1106 00:31:52.361224 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xwlzm_calico-system(3e3d7027-cc01-4677-b498-d2aaae1cd6f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xwlzm_calico-system(3e3d7027-cc01-4677-b498-d2aaae1cd6f2)\\\": rpc error: code = Unknown desc 
= failed to setup network for sandbox \\\"d865b483c4713def99bfcf90d06cb17703084ed9577bf5cf4fa893c876b152ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2" Nov 6 00:31:52.438070 containerd[1616]: time="2025-11-06T00:31:52.437974452Z" level=error msg="Failed to destroy network for sandbox \"cb6020b5e4795169ef61c7f23580c081e02e621018f331095866ac0d93eeaf76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:52.446134 containerd[1616]: time="2025-11-06T00:31:52.446039184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2tndd,Uid:82f74817-8797-47bc-b585-b333aafdc3bf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb6020b5e4795169ef61c7f23580c081e02e621018f331095866ac0d93eeaf76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:52.446459 kubelet[2801]: E1106 00:31:52.446383 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb6020b5e4795169ef61c7f23580c081e02e621018f331095866ac0d93eeaf76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:52.446565 kubelet[2801]: E1106 00:31:52.446471 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cb6020b5e4795169ef61c7f23580c081e02e621018f331095866ac0d93eeaf76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2tndd" Nov 6 00:31:52.446565 kubelet[2801]: E1106 00:31:52.446500 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb6020b5e4795169ef61c7f23580c081e02e621018f331095866ac0d93eeaf76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2tndd" Nov 6 00:31:52.446717 kubelet[2801]: E1106 00:31:52.446562 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2tndd_kube-system(82f74817-8797-47bc-b585-b333aafdc3bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2tndd_kube-system(82f74817-8797-47bc-b585-b333aafdc3bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb6020b5e4795169ef61c7f23580c081e02e621018f331095866ac0d93eeaf76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2tndd" podUID="82f74817-8797-47bc-b585-b333aafdc3bf" Nov 6 00:31:52.975911 systemd[1]: run-netns-cni\x2d88b1c235\x2d1b95\x2d6f2d\x2d63f3\x2dc59da214a209.mount: Deactivated successfully. Nov 6 00:31:52.976614 systemd[1]: run-netns-cni\x2dbeb24943\x2d56bc\x2d83c1\x2d9662\x2d49d3cb973f10.mount: Deactivated successfully. Nov 6 00:31:52.976836 systemd[1]: run-netns-cni\x2d33cf5119\x2db429\x2d87d3\x2df124\x2d3f90a14970dd.mount: Deactivated successfully. 
Nov 6 00:31:52.977073 systemd[1]: run-netns-cni\x2d3df212d5\x2d1e70\x2da4c3\x2d096c\x2d358741561e69.mount: Deactivated successfully. Nov 6 00:31:52.977357 systemd[1]: run-netns-cni\x2d3d74c235\x2d1b14\x2d2d6d\x2df343\x2da112a054de11.mount: Deactivated successfully. Nov 6 00:31:52.978484 containerd[1616]: time="2025-11-06T00:31:52.978358774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d4d4f7875-ldjcd,Uid:3eac792e-30de-470b-8978-364680127235,Namespace:calico-system,Attempt:0,}" Nov 6 00:31:52.981549 containerd[1616]: time="2025-11-06T00:31:52.981436740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6977b59f7b-5thqk,Uid:0228b1a2-410c-40ab-86ee-d344f8e34170,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:31:53.142276 containerd[1616]: time="2025-11-06T00:31:53.138946292Z" level=error msg="Failed to destroy network for sandbox \"9af83ac496e905e148aee4c4dace3c00e541b42442d6acc1b2a147811970f99e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:53.154097 systemd[1]: run-netns-cni\x2d4f537c2c\x2db7f5\x2dee66\x2d2d7a\x2d9ba2bf314ccc.mount: Deactivated successfully. Nov 6 00:31:53.192910 containerd[1616]: time="2025-11-06T00:31:53.192664800Z" level=error msg="Failed to destroy network for sandbox \"fb42d3b7030e28c95d0c3066021e0bfed191e73a37647ce97cc2ac759b233474\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:53.197775 systemd[1]: run-netns-cni\x2dbd7cd39e\x2d17f4\x2d5fa9\x2d611c\x2dbf1c34299e45.mount: Deactivated successfully. 
Nov 6 00:31:53.273179 containerd[1616]: time="2025-11-06T00:31:53.272460893Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6977b59f7b-5thqk,Uid:0228b1a2-410c-40ab-86ee-d344f8e34170,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af83ac496e905e148aee4c4dace3c00e541b42442d6acc1b2a147811970f99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:53.273373 kubelet[2801]: E1106 00:31:53.273169 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af83ac496e905e148aee4c4dace3c00e541b42442d6acc1b2a147811970f99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:53.273373 kubelet[2801]: E1106 00:31:53.273270 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af83ac496e905e148aee4c4dace3c00e541b42442d6acc1b2a147811970f99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6977b59f7b-5thqk" Nov 6 00:31:53.273373 kubelet[2801]: E1106 00:31:53.273301 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af83ac496e905e148aee4c4dace3c00e541b42442d6acc1b2a147811970f99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6977b59f7b-5thqk" Nov 6 00:31:53.273981 kubelet[2801]: E1106 00:31:53.273381 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6977b59f7b-5thqk_calico-apiserver(0228b1a2-410c-40ab-86ee-d344f8e34170)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6977b59f7b-5thqk_calico-apiserver(0228b1a2-410c-40ab-86ee-d344f8e34170)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9af83ac496e905e148aee4c4dace3c00e541b42442d6acc1b2a147811970f99e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6977b59f7b-5thqk" podUID="0228b1a2-410c-40ab-86ee-d344f8e34170" Nov 6 00:31:53.388784 containerd[1616]: time="2025-11-06T00:31:53.388688161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d4d4f7875-ldjcd,Uid:3eac792e-30de-470b-8978-364680127235,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb42d3b7030e28c95d0c3066021e0bfed191e73a37647ce97cc2ac759b233474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:53.389152 kubelet[2801]: E1106 00:31:53.389098 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb42d3b7030e28c95d0c3066021e0bfed191e73a37647ce97cc2ac759b233474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:53.389245 kubelet[2801]: E1106 00:31:53.389180 2801 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb42d3b7030e28c95d0c3066021e0bfed191e73a37647ce97cc2ac759b233474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d4d4f7875-ldjcd" Nov 6 00:31:53.389290 kubelet[2801]: E1106 00:31:53.389224 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb42d3b7030e28c95d0c3066021e0bfed191e73a37647ce97cc2ac759b233474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d4d4f7875-ldjcd" Nov 6 00:31:53.389347 kubelet[2801]: E1106 00:31:53.389309 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-d4d4f7875-ldjcd_calico-system(3eac792e-30de-470b-8978-364680127235)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-d4d4f7875-ldjcd_calico-system(3eac792e-30de-470b-8978-364680127235)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb42d3b7030e28c95d0c3066021e0bfed191e73a37647ce97cc2ac759b233474\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d4d4f7875-ldjcd" podUID="3eac792e-30de-470b-8978-364680127235" Nov 6 00:31:53.952973 containerd[1616]: time="2025-11-06T00:31:53.952869341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589569c468-rpmp8,Uid:52f5f93c-5f24-4f92-88a3-401da8e7e300,Namespace:calico-system,Attempt:0,}" Nov 6 00:31:54.801904 
containerd[1616]: time="2025-11-06T00:31:54.801552664Z" level=error msg="Failed to destroy network for sandbox \"c27714c9fbed9542c23ba1ea870381b4b0a3b1cf0ab48849fcff9e4616cdbdd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:54.806222 systemd[1]: run-netns-cni\x2d7ef5b396\x2de89d\x2d5de6\x2db794\x2dce30b2d201aa.mount: Deactivated successfully. Nov 6 00:31:54.937913 containerd[1616]: time="2025-11-06T00:31:54.937808216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589569c468-rpmp8,Uid:52f5f93c-5f24-4f92-88a3-401da8e7e300,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c27714c9fbed9542c23ba1ea870381b4b0a3b1cf0ab48849fcff9e4616cdbdd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:54.938224 kubelet[2801]: E1106 00:31:54.938159 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c27714c9fbed9542c23ba1ea870381b4b0a3b1cf0ab48849fcff9e4616cdbdd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:31:54.938726 kubelet[2801]: E1106 00:31:54.938250 2801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c27714c9fbed9542c23ba1ea870381b4b0a3b1cf0ab48849fcff9e4616cdbdd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-589569c468-rpmp8" Nov 6 00:31:54.938726 kubelet[2801]: E1106 00:31:54.938282 2801 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c27714c9fbed9542c23ba1ea870381b4b0a3b1cf0ab48849fcff9e4616cdbdd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589569c468-rpmp8" Nov 6 00:31:54.938726 kubelet[2801]: E1106 00:31:54.938343 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-589569c468-rpmp8_calico-system(52f5f93c-5f24-4f92-88a3-401da8e7e300)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-589569c468-rpmp8_calico-system(52f5f93c-5f24-4f92-88a3-401da8e7e300)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c27714c9fbed9542c23ba1ea870381b4b0a3b1cf0ab48849fcff9e4616cdbdd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-589569c468-rpmp8" podUID="52f5f93c-5f24-4f92-88a3-401da8e7e300" Nov 6 00:31:56.188542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3896668172.mount: Deactivated successfully. 
Nov 6 00:31:57.212396 containerd[1616]: time="2025-11-06T00:31:57.212125996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:31:57.215872 containerd[1616]: time="2025-11-06T00:31:57.215587542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 6 00:31:57.219740 containerd[1616]: time="2025-11-06T00:31:57.218248198Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:31:57.226248 containerd[1616]: time="2025-11-06T00:31:57.226162670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:31:57.227498 containerd[1616]: time="2025-11-06T00:31:57.227443584Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 17.669376671s" Nov 6 00:31:57.241739 containerd[1616]: time="2025-11-06T00:31:57.227491486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 6 00:31:57.277023 containerd[1616]: time="2025-11-06T00:31:57.276957644Z" level=info msg="CreateContainer within sandbox \"c26698725bc81f10646d1bbf5570895792ff15b35f2c56ca64e9713eaa073f63\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 6 00:31:57.311255 containerd[1616]: time="2025-11-06T00:31:57.309782268Z" level=info msg="Container 
f74d929ed53749c7d83123cb12c3bd58dafcd96f2b0deef6eb7b25ac9c2c82c8: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:31:57.362197 containerd[1616]: time="2025-11-06T00:31:57.361995036Z" level=info msg="CreateContainer within sandbox \"c26698725bc81f10646d1bbf5570895792ff15b35f2c56ca64e9713eaa073f63\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f74d929ed53749c7d83123cb12c3bd58dafcd96f2b0deef6eb7b25ac9c2c82c8\"" Nov 6 00:31:57.364809 containerd[1616]: time="2025-11-06T00:31:57.363136645Z" level=info msg="StartContainer for \"f74d929ed53749c7d83123cb12c3bd58dafcd96f2b0deef6eb7b25ac9c2c82c8\"" Nov 6 00:31:57.366574 containerd[1616]: time="2025-11-06T00:31:57.366504041Z" level=info msg="connecting to shim f74d929ed53749c7d83123cb12c3bd58dafcd96f2b0deef6eb7b25ac9c2c82c8" address="unix:///run/containerd/s/56d0b88b8a6461a5a7b0f8a5720fe767eb924cc5e3ed2e70e7779970ec47387f" protocol=ttrpc version=3 Nov 6 00:31:57.489294 systemd[1]: Started cri-containerd-f74d929ed53749c7d83123cb12c3bd58dafcd96f2b0deef6eb7b25ac9c2c82c8.scope - libcontainer container f74d929ed53749c7d83123cb12c3bd58dafcd96f2b0deef6eb7b25ac9c2c82c8. 
Nov 6 00:31:57.627811 containerd[1616]: time="2025-11-06T00:31:57.627461261Z" level=info msg="StartContainer for \"f74d929ed53749c7d83123cb12c3bd58dafcd96f2b0deef6eb7b25ac9c2c82c8\" returns successfully" Nov 6 00:31:57.692584 kubelet[2801]: E1106 00:31:57.692537 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:31:57.767288 kubelet[2801]: I1106 00:31:57.764468 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lh9tf" podStartSLOduration=1.105926137 podStartE2EDuration="41.764447871s" podCreationTimestamp="2025-11-06 00:31:16 +0000 UTC" firstStartedPulling="2025-11-06 00:31:16.584778838 +0000 UTC m=+30.120907337" lastFinishedPulling="2025-11-06 00:31:57.243300572 +0000 UTC m=+70.779429071" observedRunningTime="2025-11-06 00:31:57.757819242 +0000 UTC m=+71.293947741" watchObservedRunningTime="2025-11-06 00:31:57.764447871 +0000 UTC m=+71.300576370" Nov 6 00:31:57.861158 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 6 00:31:57.874727 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 6 00:31:58.040264 containerd[1616]: time="2025-11-06T00:31:58.039813166Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f74d929ed53749c7d83123cb12c3bd58dafcd96f2b0deef6eb7b25ac9c2c82c8\" id:\"1a08e406399f3e045cbaed655724410a8eb4279fd1395cac7f78b60d0a224b6b\" pid:4265 exit_status:1 exited_at:{seconds:1762389118 nanos:39110305}" Nov 6 00:31:58.189322 kubelet[2801]: I1106 00:31:58.187211 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3eac792e-30de-470b-8978-364680127235-whisker-ca-bundle\") pod \"3eac792e-30de-470b-8978-364680127235\" (UID: \"3eac792e-30de-470b-8978-364680127235\") " Nov 6 00:31:58.189322 kubelet[2801]: I1106 00:31:58.188695 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3eac792e-30de-470b-8978-364680127235-whisker-backend-key-pair\") pod \"3eac792e-30de-470b-8978-364680127235\" (UID: \"3eac792e-30de-470b-8978-364680127235\") " Nov 6 00:31:58.189322 kubelet[2801]: I1106 00:31:58.188741 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjznn\" (UniqueName: \"kubernetes.io/projected/3eac792e-30de-470b-8978-364680127235-kube-api-access-rjznn\") pod \"3eac792e-30de-470b-8978-364680127235\" (UID: \"3eac792e-30de-470b-8978-364680127235\") " Nov 6 00:31:58.707576 kubelet[2801]: E1106 00:31:58.704488 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:31:58.896535 containerd[1616]: time="2025-11-06T00:31:58.896477750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f74d929ed53749c7d83123cb12c3bd58dafcd96f2b0deef6eb7b25ac9c2c82c8\" id:\"33ea8a0c6c4b172be06ca988eb09ddbd15668a721719b036f95a28f30595659a\" pid:4301 exit_status:1 
exited_at:{seconds:1762389118 nanos:896067849}" Nov 6 00:31:58.967631 kubelet[2801]: I1106 00:31:58.967030 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eac792e-30de-470b-8978-364680127235-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3eac792e-30de-470b-8978-364680127235" (UID: "3eac792e-30de-470b-8978-364680127235"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:31:58.989399 kubelet[2801]: I1106 00:31:58.989243 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eac792e-30de-470b-8978-364680127235-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3eac792e-30de-470b-8978-364680127235" (UID: "3eac792e-30de-470b-8978-364680127235"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 00:31:58.989905 systemd[1]: var-lib-kubelet-pods-3eac792e\x2d30de\x2d470b\x2d8978\x2d364680127235-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drjznn.mount: Deactivated successfully. Nov 6 00:31:58.990094 systemd[1]: var-lib-kubelet-pods-3eac792e\x2d30de\x2d470b\x2d8978\x2d364680127235-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 6 00:31:59.002067 kubelet[2801]: I1106 00:31:58.996149 2801 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3eac792e-30de-470b-8978-364680127235-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 6 00:31:59.002067 kubelet[2801]: I1106 00:31:58.996186 2801 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3eac792e-30de-470b-8978-364680127235-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 6 00:31:59.002067 kubelet[2801]: I1106 00:31:58.996391 2801 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eac792e-30de-470b-8978-364680127235-kube-api-access-rjznn" (OuterVolumeSpecName: "kube-api-access-rjznn") pod "3eac792e-30de-470b-8978-364680127235" (UID: "3eac792e-30de-470b-8978-364680127235"). InnerVolumeSpecName "kube-api-access-rjznn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:31:59.013115 systemd[1]: Removed slice kubepods-besteffort-pod3eac792e_30de_470b_8978_364680127235.slice - libcontainer container kubepods-besteffort-pod3eac792e_30de_470b_8978_364680127235.slice. Nov 6 00:31:59.097361 kubelet[2801]: I1106 00:31:59.097275 2801 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rjznn\" (UniqueName: \"kubernetes.io/projected/3eac792e-30de-470b-8978-364680127235-kube-api-access-rjznn\") on node \"localhost\" DevicePath \"\"" Nov 6 00:31:59.642613 systemd[1]: Created slice kubepods-besteffort-podd8a1eb22_1c9e_4de5_b2e4_2283cdcf5397.slice - libcontainer container kubepods-besteffort-podd8a1eb22_1c9e_4de5_b2e4_2283cdcf5397.slice. 
Nov 6 00:31:59.702625 kubelet[2801]: I1106 00:31:59.702480 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397-whisker-ca-bundle\") pod \"whisker-786c755bc5-fhq72\" (UID: \"d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397\") " pod="calico-system/whisker-786c755bc5-fhq72" Nov 6 00:31:59.702625 kubelet[2801]: I1106 00:31:59.702554 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25tgf\" (UniqueName: \"kubernetes.io/projected/d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397-kube-api-access-25tgf\") pod \"whisker-786c755bc5-fhq72\" (UID: \"d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397\") " pod="calico-system/whisker-786c755bc5-fhq72" Nov 6 00:31:59.702625 kubelet[2801]: I1106 00:31:59.702598 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397-whisker-backend-key-pair\") pod \"whisker-786c755bc5-fhq72\" (UID: \"d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397\") " pod="calico-system/whisker-786c755bc5-fhq72" Nov 6 00:31:59.955978 containerd[1616]: time="2025-11-06T00:31:59.955898529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-786c755bc5-fhq72,Uid:d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397,Namespace:calico-system,Attempt:0,}" Nov 6 00:32:00.753221 systemd-networkd[1508]: cali0e4a19e58fe: Link UP Nov 6 00:32:00.753798 systemd-networkd[1508]: cali0e4a19e58fe: Gained carrier Nov 6 00:32:00.842703 containerd[1616]: 2025-11-06 00:32:00.290 [INFO][4328] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:32:00.842703 containerd[1616]: 2025-11-06 00:32:00.363 [INFO][4328] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--786c755bc5--fhq72-eth0 
whisker-786c755bc5- calico-system d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397 1053 0 2025-11-06 00:31:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:786c755bc5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-786c755bc5-fhq72 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0e4a19e58fe [] [] }} ContainerID="29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" Namespace="calico-system" Pod="whisker-786c755bc5-fhq72" WorkloadEndpoint="localhost-k8s-whisker--786c755bc5--fhq72-" Nov 6 00:32:00.842703 containerd[1616]: 2025-11-06 00:32:00.364 [INFO][4328] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" Namespace="calico-system" Pod="whisker-786c755bc5-fhq72" WorkloadEndpoint="localhost-k8s-whisker--786c755bc5--fhq72-eth0" Nov 6 00:32:00.842703 containerd[1616]: 2025-11-06 00:32:00.565 [INFO][4344] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" HandleID="k8s-pod-network.29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" Workload="localhost-k8s-whisker--786c755bc5--fhq72-eth0" Nov 6 00:32:00.843332 containerd[1616]: 2025-11-06 00:32:00.567 [INFO][4344] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" HandleID="k8s-pod-network.29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" Workload="localhost-k8s-whisker--786c755bc5--fhq72-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e1d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-786c755bc5-fhq72", "timestamp":"2025-11-06 00:32:00.565772053 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:32:00.843332 containerd[1616]: 2025-11-06 00:32:00.567 [INFO][4344] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:32:00.843332 containerd[1616]: 2025-11-06 00:32:00.567 [INFO][4344] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:32:00.843332 containerd[1616]: 2025-11-06 00:32:00.568 [INFO][4344] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:32:00.843332 containerd[1616]: 2025-11-06 00:32:00.586 [INFO][4344] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" host="localhost" Nov 6 00:32:00.843332 containerd[1616]: 2025-11-06 00:32:00.606 [INFO][4344] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:32:00.843332 containerd[1616]: 2025-11-06 00:32:00.624 [INFO][4344] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:32:00.843332 containerd[1616]: 2025-11-06 00:32:00.641 [INFO][4344] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:32:00.843332 containerd[1616]: 2025-11-06 00:32:00.646 [INFO][4344] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:32:00.843332 containerd[1616]: 2025-11-06 00:32:00.646 [INFO][4344] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" host="localhost" Nov 6 00:32:00.844529 containerd[1616]: 2025-11-06 00:32:00.651 [INFO][4344] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da Nov 6 00:32:00.844529 containerd[1616]: 
2025-11-06 00:32:00.698 [INFO][4344] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" host="localhost" Nov 6 00:32:00.844529 containerd[1616]: 2025-11-06 00:32:00.726 [INFO][4344] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" host="localhost" Nov 6 00:32:00.844529 containerd[1616]: 2025-11-06 00:32:00.726 [INFO][4344] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" host="localhost" Nov 6 00:32:00.844529 containerd[1616]: 2025-11-06 00:32:00.726 [INFO][4344] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:32:00.844529 containerd[1616]: 2025-11-06 00:32:00.726 [INFO][4344] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" HandleID="k8s-pod-network.29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" Workload="localhost-k8s-whisker--786c755bc5--fhq72-eth0" Nov 6 00:32:00.844714 containerd[1616]: 2025-11-06 00:32:00.736 [INFO][4328] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" Namespace="calico-system" Pod="whisker-786c755bc5-fhq72" WorkloadEndpoint="localhost-k8s-whisker--786c755bc5--fhq72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--786c755bc5--fhq72-eth0", GenerateName:"whisker-786c755bc5-", Namespace:"calico-system", SelfLink:"", UID:"d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.November, 
6, 0, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"786c755bc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-786c755bc5-fhq72", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0e4a19e58fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:32:00.844714 containerd[1616]: 2025-11-06 00:32:00.737 [INFO][4328] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" Namespace="calico-system" Pod="whisker-786c755bc5-fhq72" WorkloadEndpoint="localhost-k8s-whisker--786c755bc5--fhq72-eth0" Nov 6 00:32:00.844857 containerd[1616]: 2025-11-06 00:32:00.737 [INFO][4328] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e4a19e58fe ContainerID="29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" Namespace="calico-system" Pod="whisker-786c755bc5-fhq72" WorkloadEndpoint="localhost-k8s-whisker--786c755bc5--fhq72-eth0" Nov 6 00:32:00.844857 containerd[1616]: 2025-11-06 00:32:00.756 [INFO][4328] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" Namespace="calico-system" Pod="whisker-786c755bc5-fhq72" 
WorkloadEndpoint="localhost-k8s-whisker--786c755bc5--fhq72-eth0" Nov 6 00:32:00.844924 containerd[1616]: 2025-11-06 00:32:00.759 [INFO][4328] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" Namespace="calico-system" Pod="whisker-786c755bc5-fhq72" WorkloadEndpoint="localhost-k8s-whisker--786c755bc5--fhq72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--786c755bc5--fhq72-eth0", GenerateName:"whisker-786c755bc5-", Namespace:"calico-system", SelfLink:"", UID:"d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"786c755bc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da", Pod:"whisker-786c755bc5-fhq72", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0e4a19e58fe", MAC:"5e:7f:a0:fb:40:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:32:00.845038 containerd[1616]: 2025-11-06 00:32:00.836 [INFO][4328] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" Namespace="calico-system" Pod="whisker-786c755bc5-fhq72" WorkloadEndpoint="localhost-k8s-whisker--786c755bc5--fhq72-eth0" Nov 6 00:32:00.962455 kubelet[2801]: I1106 00:32:00.962382 2801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eac792e-30de-470b-8978-364680127235" path="/var/lib/kubelet/pods/3eac792e-30de-470b-8978-364680127235/volumes" Nov 6 00:32:01.129144 containerd[1616]: time="2025-11-06T00:32:01.128067016Z" level=info msg="connecting to shim 29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da" address="unix:///run/containerd/s/f89df36b7b894f1e696ca81d04715257f2965ecda61ed43b07476e7cadf70240" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:32:01.193265 systemd[1]: Started cri-containerd-29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da.scope - libcontainer container 29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da. 
Nov 6 00:32:01.232795 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:32:01.322920 containerd[1616]: time="2025-11-06T00:32:01.320457488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-786c755bc5-fhq72,Uid:d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397,Namespace:calico-system,Attempt:0,} returns sandbox id \"29d9d540761ef83b6e36fd7b23da01a8bd4515f17b131c8b728c368009c560da\"" Nov 6 00:32:01.322920 containerd[1616]: time="2025-11-06T00:32:01.322523224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:32:01.588414 systemd-networkd[1508]: vxlan.calico: Link UP Nov 6 00:32:01.588431 systemd-networkd[1508]: vxlan.calico: Gained carrier Nov 6 00:32:01.660062 containerd[1616]: time="2025-11-06T00:32:01.660004902Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:01.672285 containerd[1616]: time="2025-11-06T00:32:01.672228811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:32:01.682162 containerd[1616]: time="2025-11-06T00:32:01.682082696Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:32:01.682656 kubelet[2801]: E1106 00:32:01.682384 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:32:01.685546 kubelet[2801]: E1106 00:32:01.685416 2801 kuberuntime_image.go:55] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:32:01.689245 kubelet[2801]: E1106 00:32:01.689056 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c9e2f274426d4fdcb37983441f1257fa,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25tgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786c755bc5-fhq72_calico-system(d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397): ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:01.695584 containerd[1616]: time="2025-11-06T00:32:01.695055883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:32:01.809537 systemd-networkd[1508]: cali0e4a19e58fe: Gained IPv6LL Nov 6 00:32:02.051748 containerd[1616]: time="2025-11-06T00:32:02.051305866Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:02.090464 containerd[1616]: time="2025-11-06T00:32:02.085423315Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:32:02.090464 containerd[1616]: time="2025-11-06T00:32:02.085576197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:32:02.090683 kubelet[2801]: E1106 00:32:02.085788 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:32:02.090683 kubelet[2801]: E1106 00:32:02.085853 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:32:02.091172 kubelet[2801]: E1106 00:32:02.086027 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-25tgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Co
ntainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786c755bc5-fhq72_calico-system(d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:02.091172 kubelet[2801]: E1106 00:32:02.088144 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-786c755bc5-fhq72" podUID="d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397" Nov 6 00:32:02.746683 kubelet[2801]: E1106 00:32:02.746593 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-786c755bc5-fhq72" podUID="d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397" Nov 6 00:32:02.952863 containerd[1616]: time="2025-11-06T00:32:02.952795431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb775b858-snbnl,Uid:4977dfeb-b401-43e8-996c-8b0f6fd603a7,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:32:03.432155 systemd-networkd[1508]: cali92174ba84b4: Link UP Nov 6 00:32:03.433794 systemd-networkd[1508]: cali92174ba84b4: Gained carrier Nov 6 00:32:03.541872 containerd[1616]: 2025-11-06 00:32:03.175 [INFO][4615] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7bb775b858--snbnl-eth0 calico-apiserver-7bb775b858- calico-apiserver 4977dfeb-b401-43e8-996c-8b0f6fd603a7 943 0 2025-11-06 00:31:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bb775b858 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7bb775b858-snbnl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali92174ba84b4 [] [] }} ContainerID="1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" Namespace="calico-apiserver" Pod="calico-apiserver-7bb775b858-snbnl" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb775b858--snbnl-" Nov 6 00:32:03.541872 containerd[1616]: 2025-11-06 00:32:03.175 [INFO][4615] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" Namespace="calico-apiserver" 
Pod="calico-apiserver-7bb775b858-snbnl" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb775b858--snbnl-eth0" Nov 6 00:32:03.541872 containerd[1616]: 2025-11-06 00:32:03.265 [INFO][4629] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" HandleID="k8s-pod-network.1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" Workload="localhost-k8s-calico--apiserver--7bb775b858--snbnl-eth0" Nov 6 00:32:03.541127 systemd-networkd[1508]: vxlan.calico: Gained IPv6LL Nov 6 00:32:03.542355 containerd[1616]: 2025-11-06 00:32:03.265 [INFO][4629] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" HandleID="k8s-pod-network.1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" Workload="localhost-k8s-calico--apiserver--7bb775b858--snbnl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000130740), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7bb775b858-snbnl", "timestamp":"2025-11-06 00:32:03.265345572 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:32:03.542355 containerd[1616]: 2025-11-06 00:32:03.265 [INFO][4629] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:32:03.542355 containerd[1616]: 2025-11-06 00:32:03.265 [INFO][4629] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:32:03.542355 containerd[1616]: 2025-11-06 00:32:03.266 [INFO][4629] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:32:03.542355 containerd[1616]: 2025-11-06 00:32:03.280 [INFO][4629] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" host="localhost" Nov 6 00:32:03.542355 containerd[1616]: 2025-11-06 00:32:03.300 [INFO][4629] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:32:03.542355 containerd[1616]: 2025-11-06 00:32:03.323 [INFO][4629] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:32:03.542355 containerd[1616]: 2025-11-06 00:32:03.337 [INFO][4629] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:32:03.542355 containerd[1616]: 2025-11-06 00:32:03.345 [INFO][4629] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:32:03.542355 containerd[1616]: 2025-11-06 00:32:03.345 [INFO][4629] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" host="localhost" Nov 6 00:32:03.542757 containerd[1616]: 2025-11-06 00:32:03.351 [INFO][4629] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc Nov 6 00:32:03.542757 containerd[1616]: 2025-11-06 00:32:03.367 [INFO][4629] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" host="localhost" Nov 6 00:32:03.542757 containerd[1616]: 2025-11-06 00:32:03.385 [INFO][4629] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" host="localhost" Nov 6 00:32:03.542757 containerd[1616]: 2025-11-06 00:32:03.387 [INFO][4629] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" host="localhost" Nov 6 00:32:03.542757 containerd[1616]: 2025-11-06 00:32:03.387 [INFO][4629] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:32:03.542757 containerd[1616]: 2025-11-06 00:32:03.387 [INFO][4629] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" HandleID="k8s-pod-network.1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" Workload="localhost-k8s-calico--apiserver--7bb775b858--snbnl-eth0" Nov 6 00:32:03.542993 containerd[1616]: 2025-11-06 00:32:03.398 [INFO][4615] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" Namespace="calico-apiserver" Pod="calico-apiserver-7bb775b858-snbnl" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb775b858--snbnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bb775b858--snbnl-eth0", GenerateName:"calico-apiserver-7bb775b858-", Namespace:"calico-apiserver", SelfLink:"", UID:"4977dfeb-b401-43e8-996c-8b0f6fd603a7", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 31, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb775b858", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7bb775b858-snbnl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92174ba84b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:32:03.543078 containerd[1616]: 2025-11-06 00:32:03.400 [INFO][4615] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" Namespace="calico-apiserver" Pod="calico-apiserver-7bb775b858-snbnl" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb775b858--snbnl-eth0" Nov 6 00:32:03.543078 containerd[1616]: 2025-11-06 00:32:03.400 [INFO][4615] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92174ba84b4 ContainerID="1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" Namespace="calico-apiserver" Pod="calico-apiserver-7bb775b858-snbnl" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb775b858--snbnl-eth0" Nov 6 00:32:03.543078 containerd[1616]: 2025-11-06 00:32:03.434 [INFO][4615] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" Namespace="calico-apiserver" Pod="calico-apiserver-7bb775b858-snbnl" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb775b858--snbnl-eth0" Nov 6 00:32:03.543197 containerd[1616]: 2025-11-06 00:32:03.461 [INFO][4615] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" Namespace="calico-apiserver" Pod="calico-apiserver-7bb775b858-snbnl" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb775b858--snbnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bb775b858--snbnl-eth0", GenerateName:"calico-apiserver-7bb775b858-", Namespace:"calico-apiserver", SelfLink:"", UID:"4977dfeb-b401-43e8-996c-8b0f6fd603a7", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 31, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb775b858", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc", Pod:"calico-apiserver-7bb775b858-snbnl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92174ba84b4", MAC:"f6:26:20:00:dd:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:32:03.543283 containerd[1616]: 2025-11-06 00:32:03.518 [INFO][4615] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" Namespace="calico-apiserver" Pod="calico-apiserver-7bb775b858-snbnl" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb775b858--snbnl-eth0" Nov 6 00:32:03.886446 containerd[1616]: time="2025-11-06T00:32:03.884453409Z" level=info msg="connecting to shim 1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc" address="unix:///run/containerd/s/6192ba1b98276de2bdbf83e84dae20b36133ab753e3fa1e035ab9d202c3edbdd" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:32:03.952273 kubelet[2801]: E1106 00:32:03.951589 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:32:03.953839 containerd[1616]: time="2025-11-06T00:32:03.953783469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwlzm,Uid:3e3d7027-cc01-4677-b498-d2aaae1cd6f2,Namespace:calico-system,Attempt:0,}" Nov 6 00:32:03.955086 kubelet[2801]: E1106 00:32:03.954914 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:32:03.955644 containerd[1616]: time="2025-11-06T00:32:03.955258357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2tndd,Uid:82f74817-8797-47bc-b585-b333aafdc3bf,Namespace:kube-system,Attempt:0,}" Nov 6 00:32:03.990348 systemd[1]: Started cri-containerd-1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc.scope - libcontainer container 1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc. 
Nov 6 00:32:04.019695 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:32:04.305422 containerd[1616]: time="2025-11-06T00:32:04.305352347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb775b858-snbnl,Uid:4977dfeb-b401-43e8-996c-8b0f6fd603a7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1cbf7f340e656809a31a6a1f92c245300278d3dabcd3827bef085db9965859bc\"" Nov 6 00:32:04.329918 containerd[1616]: time="2025-11-06T00:32:04.327582127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:32:04.499609 systemd-networkd[1508]: cali92174ba84b4: Gained IPv6LL Nov 6 00:32:04.700089 systemd-networkd[1508]: calie10ba4cddac: Link UP Nov 6 00:32:04.704991 systemd-networkd[1508]: calie10ba4cddac: Gained carrier Nov 6 00:32:04.714191 containerd[1616]: time="2025-11-06T00:32:04.707176486Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:04.726452 containerd[1616]: time="2025-11-06T00:32:04.726303982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:32:04.726856 containerd[1616]: time="2025-11-06T00:32:04.726540822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:32:04.731218 kubelet[2801]: E1106 00:32:04.728212 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:32:04.731218 kubelet[2801]: E1106 00:32:04.728268 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:32:04.731218 kubelet[2801]: E1106 00:32:04.728417 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xgbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb775b858-snbnl_calico-apiserver(4977dfeb-b401-43e8-996c-8b0f6fd603a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:04.731218 kubelet[2801]: E1106 00:32:04.730094 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb775b858-snbnl" podUID="4977dfeb-b401-43e8-996c-8b0f6fd603a7" Nov 6 00:32:04.748690 containerd[1616]: 2025-11-06 00:32:04.398 [INFO][4704] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--668d6bf9bc--2tndd-eth0 coredns-668d6bf9bc- kube-system 82f74817-8797-47bc-b585-b333aafdc3bf 945 0 2025-11-06 00:30:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-2tndd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie10ba4cddac [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tndd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2tndd-" Nov 6 00:32:04.748690 containerd[1616]: 2025-11-06 00:32:04.398 [INFO][4704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tndd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2tndd-eth0" Nov 6 00:32:04.748690 containerd[1616]: 2025-11-06 00:32:04.517 [INFO][4721] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" HandleID="k8s-pod-network.31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" Workload="localhost-k8s-coredns--668d6bf9bc--2tndd-eth0" Nov 6 00:32:04.749076 containerd[1616]: 2025-11-06 00:32:04.517 [INFO][4721] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" HandleID="k8s-pod-network.31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" Workload="localhost-k8s-coredns--668d6bf9bc--2tndd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024e5d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-2tndd", "timestamp":"2025-11-06 00:32:04.517285663 
+0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:32:04.749076 containerd[1616]: 2025-11-06 00:32:04.517 [INFO][4721] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:32:04.749076 containerd[1616]: 2025-11-06 00:32:04.517 [INFO][4721] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:32:04.749076 containerd[1616]: 2025-11-06 00:32:04.517 [INFO][4721] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:32:04.749076 containerd[1616]: 2025-11-06 00:32:04.541 [INFO][4721] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" host="localhost" Nov 6 00:32:04.749076 containerd[1616]: 2025-11-06 00:32:04.560 [INFO][4721] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:32:04.749076 containerd[1616]: 2025-11-06 00:32:04.574 [INFO][4721] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:32:04.749076 containerd[1616]: 2025-11-06 00:32:04.577 [INFO][4721] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:32:04.749076 containerd[1616]: 2025-11-06 00:32:04.585 [INFO][4721] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:32:04.749076 containerd[1616]: 2025-11-06 00:32:04.585 [INFO][4721] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" host="localhost" Nov 6 00:32:04.749374 containerd[1616]: 2025-11-06 00:32:04.595 [INFO][4721] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416 Nov 6 00:32:04.749374 containerd[1616]: 2025-11-06 00:32:04.611 [INFO][4721] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" host="localhost" Nov 6 00:32:04.749374 containerd[1616]: 2025-11-06 00:32:04.668 [INFO][4721] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" host="localhost" Nov 6 00:32:04.749374 containerd[1616]: 2025-11-06 00:32:04.668 [INFO][4721] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" host="localhost" Nov 6 00:32:04.749374 containerd[1616]: 2025-11-06 00:32:04.669 [INFO][4721] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:32:04.749374 containerd[1616]: 2025-11-06 00:32:04.669 [INFO][4721] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" HandleID="k8s-pod-network.31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" Workload="localhost-k8s-coredns--668d6bf9bc--2tndd-eth0" Nov 6 00:32:04.749540 containerd[1616]: 2025-11-06 00:32:04.686 [INFO][4704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tndd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2tndd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2tndd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"82f74817-8797-47bc-b585-b333aafdc3bf", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 30, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-2tndd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie10ba4cddac", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:32:04.749632 containerd[1616]: 2025-11-06 00:32:04.686 [INFO][4704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tndd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2tndd-eth0" Nov 6 00:32:04.749632 containerd[1616]: 2025-11-06 00:32:04.687 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie10ba4cddac ContainerID="31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tndd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2tndd-eth0" Nov 6 00:32:04.749632 containerd[1616]: 2025-11-06 00:32:04.706 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tndd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2tndd-eth0" Nov 6 00:32:04.749728 containerd[1616]: 2025-11-06 00:32:04.707 [INFO][4704] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tndd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2tndd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2tndd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"82f74817-8797-47bc-b585-b333aafdc3bf", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 30, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416", Pod:"coredns-668d6bf9bc-2tndd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie10ba4cddac", MAC:"3a:9b:0f:a5:96:3b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:32:04.749728 containerd[1616]: 2025-11-06 00:32:04.743 [INFO][4704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tndd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2tndd-eth0" Nov 6 00:32:04.785263 kubelet[2801]: E1106 00:32:04.785200 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb775b858-snbnl" podUID="4977dfeb-b401-43e8-996c-8b0f6fd603a7" Nov 6 00:32:04.857085 containerd[1616]: time="2025-11-06T00:32:04.857011620Z" level=info msg="connecting to shim 31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416" address="unix:///run/containerd/s/41fbf2315f225c02ad4cbdc568eada0064d274e9cbc7e7050693cf445627aea3" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:32:04.912289 systemd-networkd[1508]: calibd9e563d5a3: Link UP Nov 6 00:32:04.917244 systemd-networkd[1508]: calibd9e563d5a3: Gained carrier Nov 6 00:32:04.930842 systemd[1]: Started cri-containerd-31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416.scope - libcontainer container 31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416. 
Nov 6 00:32:04.954422 containerd[1616]: time="2025-11-06T00:32:04.954203407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb775b858-p9q9w,Uid:23a4e2d6-7e35-4d28-a47f-d87913358f1f,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.400 [INFO][4697] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xwlzm-eth0 csi-node-driver- calico-system 3e3d7027-cc01-4677-b498-d2aaae1cd6f2 796 0 2025-11-06 00:31:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xwlzm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibd9e563d5a3 [] [] }} ContainerID="5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" Namespace="calico-system" Pod="csi-node-driver-xwlzm" WorkloadEndpoint="localhost-k8s-csi--node--driver--xwlzm-" Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.411 [INFO][4697] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" Namespace="calico-system" Pod="csi-node-driver-xwlzm" WorkloadEndpoint="localhost-k8s-csi--node--driver--xwlzm-eth0" Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.533 [INFO][4724] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" HandleID="k8s-pod-network.5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" Workload="localhost-k8s-csi--node--driver--xwlzm-eth0" Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.533 [INFO][4724] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" HandleID="k8s-pod-network.5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" Workload="localhost-k8s-csi--node--driver--xwlzm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fd10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xwlzm", "timestamp":"2025-11-06 00:32:04.533510765 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.533 [INFO][4724] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.669 [INFO][4724] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.669 [INFO][4724] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.728 [INFO][4724] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" host="localhost" Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.770 [INFO][4724] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.813 [INFO][4724] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.832 [INFO][4724] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.843 [INFO][4724] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.843 [INFO][4724] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" host="localhost" Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.854 [INFO][4724] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04 Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.876 [INFO][4724] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" host="localhost" Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.900 [INFO][4724] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" host="localhost" Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.900 [INFO][4724] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" host="localhost" Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.900 [INFO][4724] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:32:04.965063 containerd[1616]: 2025-11-06 00:32:04.900 [INFO][4724] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" HandleID="k8s-pod-network.5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" Workload="localhost-k8s-csi--node--driver--xwlzm-eth0" Nov 6 00:32:04.965932 containerd[1616]: 2025-11-06 00:32:04.907 [INFO][4697] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" Namespace="calico-system" Pod="csi-node-driver-xwlzm" WorkloadEndpoint="localhost-k8s-csi--node--driver--xwlzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xwlzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e3d7027-cc01-4677-b498-d2aaae1cd6f2", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 31, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xwlzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd9e563d5a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:32:04.965932 containerd[1616]: 2025-11-06 00:32:04.908 [INFO][4697] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" Namespace="calico-system" Pod="csi-node-driver-xwlzm" WorkloadEndpoint="localhost-k8s-csi--node--driver--xwlzm-eth0" Nov 6 00:32:04.965932 containerd[1616]: 2025-11-06 00:32:04.908 [INFO][4697] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd9e563d5a3 ContainerID="5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" Namespace="calico-system" Pod="csi-node-driver-xwlzm" WorkloadEndpoint="localhost-k8s-csi--node--driver--xwlzm-eth0" Nov 6 00:32:04.965932 containerd[1616]: 2025-11-06 00:32:04.912 [INFO][4697] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" Namespace="calico-system" Pod="csi-node-driver-xwlzm" WorkloadEndpoint="localhost-k8s-csi--node--driver--xwlzm-eth0" Nov 6 00:32:04.965932 containerd[1616]: 2025-11-06 00:32:04.914 [INFO][4697] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" 
Namespace="calico-system" Pod="csi-node-driver-xwlzm" WorkloadEndpoint="localhost-k8s-csi--node--driver--xwlzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xwlzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e3d7027-cc01-4677-b498-d2aaae1cd6f2", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 31, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04", Pod:"csi-node-driver-xwlzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd9e563d5a3", MAC:"e2:b0:5b:29:eb:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:32:04.965932 containerd[1616]: 2025-11-06 00:32:04.948 [INFO][4697] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" Namespace="calico-system" Pod="csi-node-driver-xwlzm" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--xwlzm-eth0" Nov 6 00:32:04.997664 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:32:05.098451 containerd[1616]: time="2025-11-06T00:32:05.097872667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2tndd,Uid:82f74817-8797-47bc-b585-b333aafdc3bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416\"" Nov 6 00:32:05.105957 kubelet[2801]: E1106 00:32:05.105146 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:32:05.129467 containerd[1616]: time="2025-11-06T00:32:05.129157363Z" level=info msg="CreateContainer within sandbox \"31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:32:05.139984 containerd[1616]: time="2025-11-06T00:32:05.139905834Z" level=info msg="connecting to shim 5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04" address="unix:///run/containerd/s/6e3fd2c376c3eb5ad776fcf348ae12153c54c1ee34953d9da610153c23df48f8" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:32:05.233349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3272201135.mount: Deactivated successfully. Nov 6 00:32:05.242306 containerd[1616]: time="2025-11-06T00:32:05.242139235Z" level=info msg="Container c3a7aeb1afdef808355218a212e17bcc9f4877ebfaee042149d0753b2bf39534: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:32:05.254278 systemd[1]: Started cri-containerd-5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04.scope - libcontainer container 5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04. 
Nov 6 00:32:05.263593 containerd[1616]: time="2025-11-06T00:32:05.263490952Z" level=info msg="CreateContainer within sandbox \"31049d55f4b3829af27c76b61a2dcb9e0094cab2c6d34c5a28b4188da6fbc416\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c3a7aeb1afdef808355218a212e17bcc9f4877ebfaee042149d0753b2bf39534\"" Nov 6 00:32:05.265413 containerd[1616]: time="2025-11-06T00:32:05.265238647Z" level=info msg="StartContainer for \"c3a7aeb1afdef808355218a212e17bcc9f4877ebfaee042149d0753b2bf39534\"" Nov 6 00:32:05.268753 containerd[1616]: time="2025-11-06T00:32:05.268661669Z" level=info msg="connecting to shim c3a7aeb1afdef808355218a212e17bcc9f4877ebfaee042149d0753b2bf39534" address="unix:///run/containerd/s/41fbf2315f225c02ad4cbdc568eada0064d274e9cbc7e7050693cf445627aea3" protocol=ttrpc version=3 Nov 6 00:32:05.310981 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:32:05.354256 systemd[1]: Started cri-containerd-c3a7aeb1afdef808355218a212e17bcc9f4877ebfaee042149d0753b2bf39534.scope - libcontainer container c3a7aeb1afdef808355218a212e17bcc9f4877ebfaee042149d0753b2bf39534. 
Nov 6 00:32:05.377209 containerd[1616]: time="2025-11-06T00:32:05.377134231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwlzm,Uid:3e3d7027-cc01-4677-b498-d2aaae1cd6f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"5eaf2fbf04d7ee78caddb353e1cd8be833a01e8e8cc4c5396de644f8999e9e04\"" Nov 6 00:32:05.385002 containerd[1616]: time="2025-11-06T00:32:05.384586411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:32:05.443958 systemd-networkd[1508]: calic55ca3f5576: Link UP Nov 6 00:32:05.464480 systemd-networkd[1508]: calic55ca3f5576: Gained carrier Nov 6 00:32:05.487587 containerd[1616]: time="2025-11-06T00:32:05.484269278Z" level=info msg="StartContainer for \"c3a7aeb1afdef808355218a212e17bcc9f4877ebfaee042149d0753b2bf39534\" returns successfully" Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.080 [INFO][4800] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7bb775b858--p9q9w-eth0 calico-apiserver-7bb775b858- calico-apiserver 23a4e2d6-7e35-4d28-a47f-d87913358f1f 947 0 2025-11-06 00:31:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bb775b858 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7bb775b858-p9q9w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic55ca3f5576 [] [] }} ContainerID="ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" Namespace="calico-apiserver" Pod="calico-apiserver-7bb775b858-p9q9w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb775b858--p9q9w-" Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.080 [INFO][4800] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" Namespace="calico-apiserver" Pod="calico-apiserver-7bb775b858-p9q9w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb775b858--p9q9w-eth0" Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.231 [INFO][4833] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" HandleID="k8s-pod-network.ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" Workload="localhost-k8s-calico--apiserver--7bb775b858--p9q9w-eth0" Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.232 [INFO][4833] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" HandleID="k8s-pod-network.ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" Workload="localhost-k8s-calico--apiserver--7bb775b858--p9q9w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7580), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7bb775b858-p9q9w", "timestamp":"2025-11-06 00:32:05.23164092 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.232 [INFO][4833] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.232 [INFO][4833] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.232 [INFO][4833] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.252 [INFO][4833] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" host="localhost" Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.279 [INFO][4833] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.310 [INFO][4833] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.329 [INFO][4833] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.347 [INFO][4833] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.348 [INFO][4833] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" host="localhost" Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.357 [INFO][4833] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.381 [INFO][4833] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" host="localhost" Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.414 [INFO][4833] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" host="localhost" Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.414 [INFO][4833] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" host="localhost" Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.414 [INFO][4833] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:32:05.524255 containerd[1616]: 2025-11-06 00:32:05.414 [INFO][4833] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" HandleID="k8s-pod-network.ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" Workload="localhost-k8s-calico--apiserver--7bb775b858--p9q9w-eth0" Nov 6 00:32:05.525508 containerd[1616]: 2025-11-06 00:32:05.430 [INFO][4800] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" Namespace="calico-apiserver" Pod="calico-apiserver-7bb775b858-p9q9w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb775b858--p9q9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bb775b858--p9q9w-eth0", GenerateName:"calico-apiserver-7bb775b858-", Namespace:"calico-apiserver", SelfLink:"", UID:"23a4e2d6-7e35-4d28-a47f-d87913358f1f", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 31, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb775b858", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7bb775b858-p9q9w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic55ca3f5576", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:32:05.525508 containerd[1616]: 2025-11-06 00:32:05.430 [INFO][4800] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" Namespace="calico-apiserver" Pod="calico-apiserver-7bb775b858-p9q9w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb775b858--p9q9w-eth0" Nov 6 00:32:05.525508 containerd[1616]: 2025-11-06 00:32:05.431 [INFO][4800] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic55ca3f5576 ContainerID="ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" Namespace="calico-apiserver" Pod="calico-apiserver-7bb775b858-p9q9w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb775b858--p9q9w-eth0" Nov 6 00:32:05.525508 containerd[1616]: 2025-11-06 00:32:05.445 [INFO][4800] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" Namespace="calico-apiserver" Pod="calico-apiserver-7bb775b858-p9q9w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb775b858--p9q9w-eth0" Nov 6 00:32:05.525508 containerd[1616]: 2025-11-06 00:32:05.473 [INFO][4800] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" Namespace="calico-apiserver" Pod="calico-apiserver-7bb775b858-p9q9w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb775b858--p9q9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bb775b858--p9q9w-eth0", GenerateName:"calico-apiserver-7bb775b858-", Namespace:"calico-apiserver", SelfLink:"", UID:"23a4e2d6-7e35-4d28-a47f-d87913358f1f", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 31, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb775b858", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d", Pod:"calico-apiserver-7bb775b858-p9q9w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic55ca3f5576", MAC:"9e:53:2a:bd:92:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:32:05.525508 containerd[1616]: 2025-11-06 00:32:05.514 [INFO][4800] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" Namespace="calico-apiserver" Pod="calico-apiserver-7bb775b858-p9q9w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bb775b858--p9q9w-eth0" Nov 6 00:32:05.626998 containerd[1616]: time="2025-11-06T00:32:05.625528194Z" level=info msg="connecting to shim ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d" address="unix:///run/containerd/s/ff3beea5556f3e5ab5f578bb4eb9a2c310f3d77937ab5b914c6aee3a2f454508" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:32:05.746332 systemd[1]: Started cri-containerd-ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d.scope - libcontainer container ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d. Nov 6 00:32:05.780856 containerd[1616]: time="2025-11-06T00:32:05.780792678Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:05.790889 kubelet[2801]: E1106 00:32:05.784795 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb775b858-snbnl" podUID="4977dfeb-b401-43e8-996c-8b0f6fd603a7" Nov 6 00:32:05.790889 kubelet[2801]: E1106 00:32:05.785995 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:32:05.828152 containerd[1616]: time="2025-11-06T00:32:05.826349651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:32:05.828152 containerd[1616]: time="2025-11-06T00:32:05.826496901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:32:05.828306 kubelet[2801]: E1106 00:32:05.826721 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:32:05.830971 kubelet[2801]: E1106 00:32:05.829927 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:32:05.830971 kubelet[2801]: E1106 00:32:05.830147 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bxgzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xwlzm_calico-system(3e3d7027-cc01-4677-b498-d2aaae1cd6f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:05.833134 containerd[1616]: time="2025-11-06T00:32:05.833094784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:32:05.838922 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:32:05.843222 systemd-networkd[1508]: calie10ba4cddac: Gained IPv6LL Nov 6 00:32:05.961003 containerd[1616]: time="2025-11-06T00:32:05.960488328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-v9zqm,Uid:e8483869-a3c9-4d7b-858a-1505af0fb5d9,Namespace:calico-system,Attempt:0,}" Nov 6 00:32:06.056486 containerd[1616]: time="2025-11-06T00:32:06.053861590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb775b858-p9q9w,Uid:23a4e2d6-7e35-4d28-a47f-d87913358f1f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ed4beece2a421ad34495c9c08a36c7317ebb97ca936c75d9483062ec0d43668d\"" Nov 6 00:32:06.126964 kubelet[2801]: I1106 00:32:06.126743 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2tndd" podStartSLOduration=74.126717421 podStartE2EDuration="1m14.126717421s" podCreationTimestamp="2025-11-06 00:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:32:06.126384277 +0000 UTC m=+79.662512796" watchObservedRunningTime="2025-11-06 00:32:06.126717421 +0000 UTC m=+79.662845920" Nov 6 00:32:06.326008 containerd[1616]: time="2025-11-06T00:32:06.325790293Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:06.400051 containerd[1616]: time="2025-11-06T00:32:06.399968810Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:32:06.400382 containerd[1616]: time="2025-11-06T00:32:06.400360605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:32:06.402962 kubelet[2801]: E1106 00:32:06.401533 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:32:06.402962 kubelet[2801]: E1106 00:32:06.401602 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:32:06.402962 kubelet[2801]: E1106 00:32:06.401870 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bxgzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xwlzm_calico-system(3e3d7027-cc01-4677-b498-d2aaae1cd6f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:06.403243 containerd[1616]: time="2025-11-06T00:32:06.402386399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:32:06.405958 kubelet[2801]: E1106 00:32:06.403680 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2" Nov 6 00:32:06.676550 systemd-networkd[1508]: calibd9e563d5a3: Gained IPv6LL Nov 6 00:32:06.793596 kubelet[2801]: E1106 00:32:06.793549 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:32:06.803891 kubelet[2801]: E1106 00:32:06.803555 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", 
failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2" Nov 6 00:32:06.876216 containerd[1616]: time="2025-11-06T00:32:06.876116664Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:06.898152 containerd[1616]: time="2025-11-06T00:32:06.896160771Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:32:06.898152 containerd[1616]: time="2025-11-06T00:32:06.896298923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:32:06.898428 kubelet[2801]: E1106 00:32:06.896469 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:32:06.898428 kubelet[2801]: E1106 00:32:06.896530 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:32:06.898428 kubelet[2801]: E1106 00:32:06.896724 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qksrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb775b858-p9q9w_calico-apiserver(23a4e2d6-7e35-4d28-a47f-d87913358f1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:06.899147 kubelet[2801]: E1106 00:32:06.898846 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb775b858-p9q9w" podUID="23a4e2d6-7e35-4d28-a47f-d87913358f1f" Nov 6 00:32:06.971591 kubelet[2801]: E1106 00:32:06.970793 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:32:06.999363 containerd[1616]: time="2025-11-06T00:32:06.999298452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6977b59f7b-5thqk,Uid:0228b1a2-410c-40ab-86ee-d344f8e34170,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:32:07.030907 containerd[1616]: time="2025-11-06T00:32:07.030323699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589569c468-rpmp8,Uid:52f5f93c-5f24-4f92-88a3-401da8e7e300,Namespace:calico-system,Attempt:0,}" Nov 6 00:32:07.030907 containerd[1616]: time="2025-11-06T00:32:07.030550721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wggz5,Uid:a1313b9d-cae7-480b-9dd6-87cba17dd41d,Namespace:kube-system,Attempt:0,}" Nov 6 00:32:07.341095 systemd-networkd[1508]: calic55ca3f5576: Gained IPv6LL Nov 6 00:32:07.435954 systemd[1]: Started sshd@7-10.0.0.111:22-10.0.0.1:50830.service - OpenSSH per-connection server daemon (10.0.0.1:50830). Nov 6 00:32:07.559274 systemd-networkd[1508]: cali7ee02dd111f: Link UP Nov 6 00:32:07.573988 systemd-networkd[1508]: cali7ee02dd111f: Gained carrier Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:06.608 [INFO][4962] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--v9zqm-eth0 goldmane-666569f655- calico-system e8483869-a3c9-4d7b-858a-1505af0fb5d9 936 0 2025-11-06 00:31:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-v9zqm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7ee02dd111f [] [] }} ContainerID="71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" Namespace="calico-system" Pod="goldmane-666569f655-v9zqm" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--v9zqm-" Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:06.609 [INFO][4962] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" Namespace="calico-system" Pod="goldmane-666569f655-v9zqm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--v9zqm-eth0" Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:06.651 [INFO][4979] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" HandleID="k8s-pod-network.71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" Workload="localhost-k8s-goldmane--666569f655--v9zqm-eth0" Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:06.651 [INFO][4979] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" HandleID="k8s-pod-network.71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" Workload="localhost-k8s-goldmane--666569f655--v9zqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-v9zqm", "timestamp":"2025-11-06 00:32:06.65154463 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:06.652 [INFO][4979] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:06.652 [INFO][4979] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:06.652 [INFO][4979] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:06.920 [INFO][4979] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" host="localhost" Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:07.080 [INFO][4979] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:07.207 [INFO][4979] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:07.268 [INFO][4979] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:07.367 [INFO][4979] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:07.367 [INFO][4979] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" host="localhost" Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:07.388 [INFO][4979] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391 Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:07.459 [INFO][4979] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" host="localhost" Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:07.503 [INFO][4979] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" host="localhost" Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:07.503 [INFO][4979] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" host="localhost" Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:07.503 [INFO][4979] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:32:07.671719 containerd[1616]: 2025-11-06 00:32:07.503 [INFO][4979] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" HandleID="k8s-pod-network.71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" Workload="localhost-k8s-goldmane--666569f655--v9zqm-eth0" Nov 6 00:32:07.673270 containerd[1616]: 2025-11-06 00:32:07.526 [INFO][4962] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" Namespace="calico-system" Pod="goldmane-666569f655-v9zqm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--v9zqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--v9zqm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e8483869-a3c9-4d7b-858a-1505af0fb5d9", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 31, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-v9zqm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7ee02dd111f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:32:07.673270 containerd[1616]: 2025-11-06 00:32:07.526 [INFO][4962] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" Namespace="calico-system" Pod="goldmane-666569f655-v9zqm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--v9zqm-eth0" Nov 6 00:32:07.673270 containerd[1616]: 2025-11-06 00:32:07.526 [INFO][4962] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ee02dd111f ContainerID="71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" Namespace="calico-system" Pod="goldmane-666569f655-v9zqm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--v9zqm-eth0" Nov 6 00:32:07.673270 containerd[1616]: 2025-11-06 00:32:07.575 [INFO][4962] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" Namespace="calico-system" Pod="goldmane-666569f655-v9zqm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--v9zqm-eth0" Nov 6 00:32:07.673270 containerd[1616]: 2025-11-06 00:32:07.583 [INFO][4962] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" Namespace="calico-system" Pod="goldmane-666569f655-v9zqm" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--v9zqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--v9zqm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e8483869-a3c9-4d7b-858a-1505af0fb5d9", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 31, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391", Pod:"goldmane-666569f655-v9zqm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7ee02dd111f", MAC:"5a:10:53:4a:4f:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:32:07.673270 containerd[1616]: 2025-11-06 00:32:07.643 [INFO][4962] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" Namespace="calico-system" Pod="goldmane-666569f655-v9zqm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--v9zqm-eth0" Nov 6 00:32:07.815247 kubelet[2801]: E1106 00:32:07.812849 2801 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb775b858-p9q9w" podUID="23a4e2d6-7e35-4d28-a47f-d87913358f1f" Nov 6 00:32:07.913980 sshd[5042]: Accepted publickey for core from 10.0.0.1 port 50830 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:32:07.917191 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:07.927640 containerd[1616]: time="2025-11-06T00:32:07.925476077Z" level=info msg="connecting to shim 71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391" address="unix:///run/containerd/s/2a36f483cbb206a2913209999a0650f4ef3618db3004faed5ed19f379bfaf3d6" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:32:07.951534 systemd-logind[1591]: New session 8 of user core. Nov 6 00:32:07.968403 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 6 00:32:08.014832 systemd-networkd[1508]: calif92ab546630: Link UP Nov 6 00:32:08.015152 systemd-networkd[1508]: calif92ab546630: Gained carrier Nov 6 00:32:08.128224 systemd[1]: Started cri-containerd-71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391.scope - libcontainer container 71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391. 
Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.514 [INFO][5007] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--589569c468--rpmp8-eth0 calico-kube-controllers-589569c468- calico-system 52f5f93c-5f24-4f92-88a3-401da8e7e300 942 0 2025-11-06 00:31:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:589569c468 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-589569c468-rpmp8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif92ab546630 [] [] }} ContainerID="7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" Namespace="calico-system" Pod="calico-kube-controllers-589569c468-rpmp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589569c468--rpmp8-" Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.514 [INFO][5007] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" Namespace="calico-system" Pod="calico-kube-controllers-589569c468-rpmp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589569c468--rpmp8-eth0" Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.754 [INFO][5048] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" HandleID="k8s-pod-network.7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" Workload="localhost-k8s-calico--kube--controllers--589569c468--rpmp8-eth0" Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.754 [INFO][5048] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" HandleID="k8s-pod-network.7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" Workload="localhost-k8s-calico--kube--controllers--589569c468--rpmp8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ab7c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-589569c468-rpmp8", "timestamp":"2025-11-06 00:32:07.754569488 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.754 [INFO][5048] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.755 [INFO][5048] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.755 [INFO][5048] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.786 [INFO][5048] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" host="localhost" Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.808 [INFO][5048] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.862 [INFO][5048] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.908 [INFO][5048] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.943 [INFO][5048] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.943 [INFO][5048] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" host="localhost" Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.955 [INFO][5048] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.963 [INFO][5048] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" host="localhost" Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.993 [INFO][5048] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" host="localhost" Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.993 [INFO][5048] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" host="localhost" Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.993 [INFO][5048] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:32:08.164191 containerd[1616]: 2025-11-06 00:32:07.993 [INFO][5048] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" HandleID="k8s-pod-network.7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" Workload="localhost-k8s-calico--kube--controllers--589569c468--rpmp8-eth0" Nov 6 00:32:08.165476 containerd[1616]: 2025-11-06 00:32:08.010 [INFO][5007] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" Namespace="calico-system" Pod="calico-kube-controllers-589569c468-rpmp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589569c468--rpmp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--589569c468--rpmp8-eth0", GenerateName:"calico-kube-controllers-589569c468-", Namespace:"calico-system", SelfLink:"", UID:"52f5f93c-5f24-4f92-88a3-401da8e7e300", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 31, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589569c468", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-589569c468-rpmp8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif92ab546630", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:32:08.165476 containerd[1616]: 2025-11-06 00:32:08.011 [INFO][5007] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" Namespace="calico-system" Pod="calico-kube-controllers-589569c468-rpmp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589569c468--rpmp8-eth0" Nov 6 00:32:08.165476 containerd[1616]: 2025-11-06 00:32:08.011 [INFO][5007] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif92ab546630 ContainerID="7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" Namespace="calico-system" Pod="calico-kube-controllers-589569c468-rpmp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589569c468--rpmp8-eth0" Nov 6 00:32:08.165476 containerd[1616]: 2025-11-06 00:32:08.017 [INFO][5007] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" Namespace="calico-system" Pod="calico-kube-controllers-589569c468-rpmp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589569c468--rpmp8-eth0" Nov 6 00:32:08.165476 containerd[1616]: 2025-11-06 00:32:08.023 [INFO][5007] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" Namespace="calico-system" Pod="calico-kube-controllers-589569c468-rpmp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589569c468--rpmp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--589569c468--rpmp8-eth0", GenerateName:"calico-kube-controllers-589569c468-", Namespace:"calico-system", SelfLink:"", UID:"52f5f93c-5f24-4f92-88a3-401da8e7e300", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 31, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589569c468", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad", Pod:"calico-kube-controllers-589569c468-rpmp8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif92ab546630", MAC:"7e:55:ad:ab:64:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:32:08.165476 containerd[1616]: 2025-11-06 00:32:08.091 [INFO][5007] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" Namespace="calico-system" Pod="calico-kube-controllers-589569c468-rpmp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589569c468--rpmp8-eth0" Nov 6 00:32:08.209044 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or 
address Nov 6 00:32:08.339827 containerd[1616]: time="2025-11-06T00:32:08.338272819Z" level=info msg="connecting to shim 7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad" address="unix:///run/containerd/s/9a07c58ea1500bf123294ffa777c8347d7ee3233556f7349795fadac1b5cf23c" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:32:08.393864 containerd[1616]: time="2025-11-06T00:32:08.391529060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-v9zqm,Uid:e8483869-a3c9-4d7b-858a-1505af0fb5d9,Namespace:calico-system,Attempt:0,} returns sandbox id \"71f97097243a8461242461b9149d41f899a985355ddd85ef3261316ab73ab391\"" Nov 6 00:32:08.403206 containerd[1616]: time="2025-11-06T00:32:08.398813578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:32:08.422343 systemd-networkd[1508]: cali9768d08148c: Link UP Nov 6 00:32:08.423217 systemd-networkd[1508]: cali9768d08148c: Gained carrier Nov 6 00:32:08.452282 systemd[1]: Started cri-containerd-7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad.scope - libcontainer container 7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad. 
Nov 6 00:32:08.479113 sshd[5117]: Connection closed by 10.0.0.1 port 50830 Nov 6 00:32:08.479133 sshd-session[5042]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:07.534 [INFO][5025] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--wggz5-eth0 coredns-668d6bf9bc- kube-system a1313b9d-cae7-480b-9dd6-87cba17dd41d 946 0 2025-11-06 00:30:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-wggz5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9768d08148c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" Namespace="kube-system" Pod="coredns-668d6bf9bc-wggz5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wggz5-" Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:07.534 [INFO][5025] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" Namespace="kube-system" Pod="coredns-668d6bf9bc-wggz5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wggz5-eth0" Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:07.786 [INFO][5054] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" HandleID="k8s-pod-network.9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" Workload="localhost-k8s-coredns--668d6bf9bc--wggz5-eth0" Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:07.789 [INFO][5054] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" 
HandleID="k8s-pod-network.9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" Workload="localhost-k8s-coredns--668d6bf9bc--wggz5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00049dbe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-wggz5", "timestamp":"2025-11-06 00:32:07.786420164 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:07.790 [INFO][5054] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:07.999 [INFO][5054] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:07.999 [INFO][5054] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:08.114 [INFO][5054] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" host="localhost" Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:08.162 [INFO][5054] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:08.244 [INFO][5054] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:08.259 [INFO][5054] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:08.280 [INFO][5054] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:08.281 [INFO][5054] 
ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" host="localhost" Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:08.295 [INFO][5054] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812 Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:08.331 [INFO][5054] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" host="localhost" Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:08.376 [INFO][5054] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" host="localhost" Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:08.377 [INFO][5054] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" host="localhost" Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:08.377 [INFO][5054] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:32:08.492412 containerd[1616]: 2025-11-06 00:32:08.377 [INFO][5054] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" HandleID="k8s-pod-network.9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" Workload="localhost-k8s-coredns--668d6bf9bc--wggz5-eth0" Nov 6 00:32:08.497593 containerd[1616]: 2025-11-06 00:32:08.407 [INFO][5025] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" Namespace="kube-system" Pod="coredns-668d6bf9bc-wggz5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wggz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wggz5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a1313b9d-cae7-480b-9dd6-87cba17dd41d", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 30, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-wggz5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9768d08148c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:32:08.497593 containerd[1616]: 2025-11-06 00:32:08.408 [INFO][5025] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" Namespace="kube-system" Pod="coredns-668d6bf9bc-wggz5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wggz5-eth0" Nov 6 00:32:08.497593 containerd[1616]: 2025-11-06 00:32:08.409 [INFO][5025] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9768d08148c ContainerID="9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" Namespace="kube-system" Pod="coredns-668d6bf9bc-wggz5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wggz5-eth0" Nov 6 00:32:08.497593 containerd[1616]: 2025-11-06 00:32:08.420 [INFO][5025] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" Namespace="kube-system" Pod="coredns-668d6bf9bc-wggz5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wggz5-eth0" Nov 6 00:32:08.497593 containerd[1616]: 2025-11-06 00:32:08.428 [INFO][5025] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" Namespace="kube-system" Pod="coredns-668d6bf9bc-wggz5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wggz5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wggz5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a1313b9d-cae7-480b-9dd6-87cba17dd41d", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 30, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812", Pod:"coredns-668d6bf9bc-wggz5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9768d08148c", MAC:"3e:a2:d8:2a:b4:10", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:32:08.497593 containerd[1616]: 2025-11-06 00:32:08.468 [INFO][5025] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" Namespace="kube-system" Pod="coredns-668d6bf9bc-wggz5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wggz5-eth0" Nov 6 00:32:08.495167 systemd[1]: sshd@7-10.0.0.111:22-10.0.0.1:50830.service: Deactivated successfully. Nov 6 00:32:08.501551 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 00:32:08.507783 systemd-logind[1591]: Session 8 logged out. Waiting for processes to exit. Nov 6 00:32:08.515857 systemd-logind[1591]: Removed session 8. Nov 6 00:32:08.529642 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:32:08.613571 systemd-networkd[1508]: cali1c5988b4144: Link UP Nov 6 00:32:08.620655 systemd-networkd[1508]: cali1c5988b4144: Gained carrier Nov 6 00:32:08.640808 containerd[1616]: time="2025-11-06T00:32:08.640638020Z" level=info msg="connecting to shim 9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812" address="unix:///run/containerd/s/be2b1169d21d7a78f5fcd32003eebe59c52ea2134155c587c2591365ee051917" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:07.481 [INFO][4996] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6977b59f7b--5thqk-eth0 calico-apiserver-6977b59f7b- calico-apiserver 0228b1a2-410c-40ab-86ee-d344f8e34170 933 0 2025-11-06 00:31:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6977b59f7b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6977b59f7b-5thqk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1c5988b4144 [] [] }} 
ContainerID="a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" Namespace="calico-apiserver" Pod="calico-apiserver-6977b59f7b-5thqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6977b59f7b--5thqk-" Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:07.483 [INFO][4996] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" Namespace="calico-apiserver" Pod="calico-apiserver-6977b59f7b-5thqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6977b59f7b--5thqk-eth0" Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:07.801 [INFO][5056] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" HandleID="k8s-pod-network.a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" Workload="localhost-k8s-calico--apiserver--6977b59f7b--5thqk-eth0" Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:07.801 [INFO][5056] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" HandleID="k8s-pod-network.a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" Workload="localhost-k8s-calico--apiserver--6977b59f7b--5thqk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00057f560), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6977b59f7b-5thqk", "timestamp":"2025-11-06 00:32:07.801062897 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:07.801 [INFO][5056] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:08.377 [INFO][5056] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:08.377 [INFO][5056] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:08.445 [INFO][5056] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" host="localhost" Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:08.478 [INFO][5056] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:08.499 [INFO][5056] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:08.507 [INFO][5056] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:08.520 [INFO][5056] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:08.520 [INFO][5056] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" host="localhost" Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:08.526 [INFO][5056] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5 Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:08.542 [INFO][5056] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" host="localhost" Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:08.591 [INFO][5056] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" host="localhost" Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:08.591 [INFO][5056] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" host="localhost" Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:08.591 [INFO][5056] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:32:08.671161 containerd[1616]: 2025-11-06 00:32:08.591 [INFO][5056] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" HandleID="k8s-pod-network.a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" Workload="localhost-k8s-calico--apiserver--6977b59f7b--5thqk-eth0" Nov 6 00:32:08.673356 containerd[1616]: 2025-11-06 00:32:08.601 [INFO][4996] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" Namespace="calico-apiserver" Pod="calico-apiserver-6977b59f7b-5thqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6977b59f7b--5thqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6977b59f7b--5thqk-eth0", GenerateName:"calico-apiserver-6977b59f7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"0228b1a2-410c-40ab-86ee-d344f8e34170", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 31, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6977b59f7b", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6977b59f7b-5thqk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1c5988b4144", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:32:08.673356 containerd[1616]: 2025-11-06 00:32:08.602 [INFO][4996] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" Namespace="calico-apiserver" Pod="calico-apiserver-6977b59f7b-5thqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6977b59f7b--5thqk-eth0" Nov 6 00:32:08.673356 containerd[1616]: 2025-11-06 00:32:08.602 [INFO][4996] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c5988b4144 ContainerID="a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" Namespace="calico-apiserver" Pod="calico-apiserver-6977b59f7b-5thqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6977b59f7b--5thqk-eth0" Nov 6 00:32:08.673356 containerd[1616]: 2025-11-06 00:32:08.619 [INFO][4996] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" Namespace="calico-apiserver" Pod="calico-apiserver-6977b59f7b-5thqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6977b59f7b--5thqk-eth0" Nov 6 00:32:08.673356 containerd[1616]: 2025-11-06 
00:32:08.622 [INFO][4996] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" Namespace="calico-apiserver" Pod="calico-apiserver-6977b59f7b-5thqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6977b59f7b--5thqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6977b59f7b--5thqk-eth0", GenerateName:"calico-apiserver-6977b59f7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"0228b1a2-410c-40ab-86ee-d344f8e34170", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 31, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6977b59f7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5", Pod:"calico-apiserver-6977b59f7b-5thqk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1c5988b4144", MAC:"0a:bc:f7:11:bc:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:32:08.673356 containerd[1616]: 2025-11-06 00:32:08.656 [INFO][4996] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" Namespace="calico-apiserver" Pod="calico-apiserver-6977b59f7b-5thqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6977b59f7b--5thqk-eth0" Nov 6 00:32:08.739505 containerd[1616]: time="2025-11-06T00:32:08.739334783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589569c468-rpmp8,Uid:52f5f93c-5f24-4f92-88a3-401da8e7e300,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a550aef4ba4ccb46334a892a80b28e754b07b077020141517ede438e0b0bcad\"" Nov 6 00:32:08.763544 systemd[1]: Started cri-containerd-9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812.scope - libcontainer container 9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812. Nov 6 00:32:08.776224 containerd[1616]: time="2025-11-06T00:32:08.776104152Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:08.776580 containerd[1616]: time="2025-11-06T00:32:08.776411948Z" level=info msg="connecting to shim a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5" address="unix:///run/containerd/s/a35bcd94beac3e9a80d79ebc1c1805e3840053d2ef0d059093903ffc1c651755" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:32:08.778393 containerd[1616]: time="2025-11-06T00:32:08.778332959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:32:08.778552 containerd[1616]: time="2025-11-06T00:32:08.778359510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:32:08.779294 kubelet[2801]: E1106 00:32:08.779216 2801 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:32:08.779294 kubelet[2801]: E1106 00:32:08.779291 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:32:08.780256 containerd[1616]: time="2025-11-06T00:32:08.780219735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:32:08.780597 kubelet[2801]: E1106 00:32:08.780524 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOn
ly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69hwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-v9zqm_calico-system(e8483869-a3c9-4d7b-858a-1505af0fb5d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:08.782226 kubelet[2801]: 
E1106 00:32:08.782165 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v9zqm" podUID="e8483869-a3c9-4d7b-858a-1505af0fb5d9" Nov 6 00:32:08.800981 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:32:08.807384 kubelet[2801]: E1106 00:32:08.807300 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v9zqm" podUID="e8483869-a3c9-4d7b-858a-1505af0fb5d9" Nov 6 00:32:08.823395 systemd[1]: Started cri-containerd-a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5.scope - libcontainer container a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5. 
Nov 6 00:32:08.850180 systemd-networkd[1508]: cali7ee02dd111f: Gained IPv6LL Nov 6 00:32:08.940837 containerd[1616]: time="2025-11-06T00:32:08.938241697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wggz5,Uid:a1313b9d-cae7-480b-9dd6-87cba17dd41d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812\"" Nov 6 00:32:08.941033 kubelet[2801]: E1106 00:32:08.939797 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:32:08.948586 containerd[1616]: time="2025-11-06T00:32:08.947081001Z" level=info msg="CreateContainer within sandbox \"9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:32:08.971131 containerd[1616]: time="2025-11-06T00:32:08.971070005Z" level=info msg="Container 57049bd618d281d4869834ff8fe99d9bb03a87d38a52a3bff0f6080dc3570c78: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:32:08.981554 containerd[1616]: time="2025-11-06T00:32:08.981492237Z" level=info msg="CreateContainer within sandbox \"9aafb34a8c545aac69311d88376f722f8e5f1f97c93a8ee347242a499ee2e812\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"57049bd618d281d4869834ff8fe99d9bb03a87d38a52a3bff0f6080dc3570c78\"" Nov 6 00:32:08.983816 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:32:08.984806 containerd[1616]: time="2025-11-06T00:32:08.984602019Z" level=info msg="StartContainer for \"57049bd618d281d4869834ff8fe99d9bb03a87d38a52a3bff0f6080dc3570c78\"" Nov 6 00:32:08.990704 containerd[1616]: time="2025-11-06T00:32:08.989951498Z" level=info msg="connecting to shim 57049bd618d281d4869834ff8fe99d9bb03a87d38a52a3bff0f6080dc3570c78" 
address="unix:///run/containerd/s/be2b1169d21d7a78f5fcd32003eebe59c52ea2134155c587c2591365ee051917" protocol=ttrpc version=3 Nov 6 00:32:09.018073 systemd[1]: Started cri-containerd-57049bd618d281d4869834ff8fe99d9bb03a87d38a52a3bff0f6080dc3570c78.scope - libcontainer container 57049bd618d281d4869834ff8fe99d9bb03a87d38a52a3bff0f6080dc3570c78. Nov 6 00:32:09.043543 containerd[1616]: time="2025-11-06T00:32:09.043433856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6977b59f7b-5thqk,Uid:0228b1a2-410c-40ab-86ee-d344f8e34170,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a05a62c10e160f6c733ad49be12961660f88a7d31115757c62b9de110ae98af5\"" Nov 6 00:32:09.074825 containerd[1616]: time="2025-11-06T00:32:09.074752586Z" level=info msg="StartContainer for \"57049bd618d281d4869834ff8fe99d9bb03a87d38a52a3bff0f6080dc3570c78\" returns successfully" Nov 6 00:32:09.106371 systemd-networkd[1508]: calif92ab546630: Gained IPv6LL Nov 6 00:32:09.136818 containerd[1616]: time="2025-11-06T00:32:09.135823120Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:09.262849 containerd[1616]: time="2025-11-06T00:32:09.262637608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:32:09.262849 containerd[1616]: time="2025-11-06T00:32:09.262655623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:32:09.263699 kubelet[2801]: E1106 00:32:09.263132 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:32:09.263699 kubelet[2801]: E1106 00:32:09.263206 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:32:09.263699 kubelet[2801]: E1106 00:32:09.263539 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gtjpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnl
y:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-589569c468-rpmp8_calico-system(52f5f93c-5f24-4f92-88a3-401da8e7e300): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:09.264536 containerd[1616]: time="2025-11-06T00:32:09.263843531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:32:09.264985 kubelet[2801]: E1106 00:32:09.264871 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-589569c468-rpmp8" podUID="52f5f93c-5f24-4f92-88a3-401da8e7e300" Nov 6 00:32:09.678968 containerd[1616]: time="2025-11-06T00:32:09.678900657Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:09.729536 containerd[1616]: time="2025-11-06T00:32:09.729409528Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:32:09.731087 containerd[1616]: time="2025-11-06T00:32:09.729449754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:32:09.731388 kubelet[2801]: E1106 00:32:09.731240 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:32:09.731388 kubelet[2801]: E1106 00:32:09.731328 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:32:09.740750 kubelet[2801]: E1106 00:32:09.740644 2801 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdhnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6977b59f7b-5thqk_calico-apiserver(0228b1a2-410c-40ab-86ee-d344f8e34170): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:09.746576 kubelet[2801]: E1106 00:32:09.746454 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6977b59f7b-5thqk" podUID="0228b1a2-410c-40ab-86ee-d344f8e34170" Nov 6 00:32:09.834353 kubelet[2801]: E1106 00:32:09.834267 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6977b59f7b-5thqk" podUID="0228b1a2-410c-40ab-86ee-d344f8e34170" Nov 6 00:32:09.842831 kubelet[2801]: E1106 00:32:09.840338 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:32:09.844767 kubelet[2801]: E1106 00:32:09.844678 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-589569c468-rpmp8" podUID="52f5f93c-5f24-4f92-88a3-401da8e7e300" Nov 6 00:32:09.850871 kubelet[2801]: E1106 00:32:09.850796 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v9zqm" podUID="e8483869-a3c9-4d7b-858a-1505af0fb5d9" Nov 6 00:32:09.878276 
systemd-networkd[1508]: cali9768d08148c: Gained IPv6LL Nov 6 00:32:10.470039 systemd-networkd[1508]: cali1c5988b4144: Gained IPv6LL Nov 6 00:32:10.519328 kubelet[2801]: I1106 00:32:10.519052 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wggz5" podStartSLOduration=78.519023345 podStartE2EDuration="1m18.519023345s" podCreationTimestamp="2025-11-06 00:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:32:10.282455135 +0000 UTC m=+83.818583624" watchObservedRunningTime="2025-11-06 00:32:10.519023345 +0000 UTC m=+84.055151844" Nov 6 00:32:10.844778 kubelet[2801]: E1106 00:32:10.843261 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:32:10.844778 kubelet[2801]: E1106 00:32:10.844032 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6977b59f7b-5thqk" podUID="0228b1a2-410c-40ab-86ee-d344f8e34170" Nov 6 00:32:11.848899 kubelet[2801]: E1106 00:32:11.847508 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:32:11.953857 kubelet[2801]: E1106 00:32:11.953110 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:32:13.509588 systemd[1]: Started sshd@8-10.0.0.111:22-10.0.0.1:48706.service - OpenSSH per-connection server daemon (10.0.0.1:48706). Nov 6 00:32:13.680905 sshd[5375]: Accepted publickey for core from 10.0.0.1 port 48706 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:32:13.689382 sshd-session[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:13.702536 systemd-logind[1591]: New session 9 of user core. Nov 6 00:32:13.723322 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 00:32:14.148196 sshd[5378]: Connection closed by 10.0.0.1 port 48706 Nov 6 00:32:14.151528 sshd-session[5375]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:14.163270 systemd[1]: sshd@8-10.0.0.111:22-10.0.0.1:48706.service: Deactivated successfully. Nov 6 00:32:14.174693 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 00:32:14.185625 systemd-logind[1591]: Session 9 logged out. Waiting for processes to exit. Nov 6 00:32:14.190682 systemd-logind[1591]: Removed session 9. 
Nov 6 00:32:14.959875 containerd[1616]: time="2025-11-06T00:32:14.959531521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:32:15.301994 containerd[1616]: time="2025-11-06T00:32:15.301472662Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:15.322974 containerd[1616]: time="2025-11-06T00:32:15.322815121Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:32:15.323333 containerd[1616]: time="2025-11-06T00:32:15.323202326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:32:15.323515 kubelet[2801]: E1106 00:32:15.323473 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:32:15.324672 kubelet[2801]: E1106 00:32:15.324019 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:32:15.324672 kubelet[2801]: E1106 00:32:15.324172 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c9e2f274426d4fdcb37983441f1257fa,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25tgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786c755bc5-fhq72_calico-system(d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:15.329441 containerd[1616]: time="2025-11-06T00:32:15.326448325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 
00:32:15.682259 containerd[1616]: time="2025-11-06T00:32:15.682159218Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:15.684589 containerd[1616]: time="2025-11-06T00:32:15.684506662Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:32:15.684706 containerd[1616]: time="2025-11-06T00:32:15.684635828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:32:15.684978 kubelet[2801]: E1106 00:32:15.684896 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:32:15.685088 kubelet[2801]: E1106 00:32:15.684987 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:32:15.685219 kubelet[2801]: E1106 00:32:15.685151 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-25tgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786c755bc5-fhq72_calico-system(d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:15.686771 kubelet[2801]: E1106 00:32:15.686683 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-786c755bc5-fhq72" podUID="d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397" Nov 6 00:32:16.803437 kubelet[2801]: E1106 00:32:16.801340 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:32:19.172902 systemd[1]: Started sshd@9-10.0.0.111:22-10.0.0.1:48718.service - OpenSSH per-connection server daemon (10.0.0.1:48718). Nov 6 00:32:19.355184 sshd[5402]: Accepted publickey for core from 10.0.0.1 port 48718 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:32:19.362255 sshd-session[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:19.381288 systemd-logind[1591]: New session 10 of user core. Nov 6 00:32:19.399489 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 6 00:32:19.717228 sshd[5405]: Connection closed by 10.0.0.1 port 48718 Nov 6 00:32:19.719110 sshd-session[5402]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:19.738535 systemd-logind[1591]: Session 10 logged out. Waiting for processes to exit. Nov 6 00:32:19.739108 systemd[1]: sshd@9-10.0.0.111:22-10.0.0.1:48718.service: Deactivated successfully. Nov 6 00:32:19.742866 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 00:32:19.745793 systemd-logind[1591]: Removed session 10. Nov 6 00:32:19.960254 kubelet[2801]: E1106 00:32:19.960192 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:32:19.962516 containerd[1616]: time="2025-11-06T00:32:19.962437705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:32:20.336279 containerd[1616]: time="2025-11-06T00:32:20.335793096Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:20.449791 containerd[1616]: time="2025-11-06T00:32:20.449570573Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:32:20.452266 containerd[1616]: time="2025-11-06T00:32:20.449766004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:32:20.452366 kubelet[2801]: E1106 00:32:20.450378 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:32:20.452366 kubelet[2801]: E1106 00:32:20.450447 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:32:20.452583 kubelet[2801]: E1106 00:32:20.452527 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xgbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb775b858-snbnl_calico-apiserver(4977dfeb-b401-43e8-996c-8b0f6fd603a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:20.453516 containerd[1616]: time="2025-11-06T00:32:20.453440049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:32:20.456002 kubelet[2801]: E1106 00:32:20.455932 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb775b858-snbnl" podUID="4977dfeb-b401-43e8-996c-8b0f6fd603a7" Nov 6 00:32:20.876797 containerd[1616]: 
time="2025-11-06T00:32:20.875086116Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:20.885629 containerd[1616]: time="2025-11-06T00:32:20.885463106Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:32:20.885629 containerd[1616]: time="2025-11-06T00:32:20.885585228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:32:20.890860 kubelet[2801]: E1106 00:32:20.885990 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:32:20.890860 kubelet[2801]: E1106 00:32:20.886040 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:32:20.890860 kubelet[2801]: E1106 00:32:20.886188 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qksrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb775b858-p9q9w_calico-apiserver(23a4e2d6-7e35-4d28-a47f-d87913358f1f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:20.905775 kubelet[2801]: E1106 00:32:20.891820 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb775b858-p9q9w" podUID="23a4e2d6-7e35-4d28-a47f-d87913358f1f" Nov 6 00:32:20.963791 containerd[1616]: time="2025-11-06T00:32:20.962391102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:32:21.314283 containerd[1616]: time="2025-11-06T00:32:21.313366953Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:21.337500 containerd[1616]: time="2025-11-06T00:32:21.337271759Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:32:21.337500 containerd[1616]: time="2025-11-06T00:32:21.337348604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:32:21.338257 kubelet[2801]: E1106 00:32:21.338207 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:32:21.338779 kubelet[2801]: E1106 00:32:21.338752 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:32:21.339048 kubelet[2801]: E1106 00:32:21.339004 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bxgzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeE
scalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xwlzm_calico-system(3e3d7027-cc01-4677-b498-d2aaae1cd6f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:21.366295 containerd[1616]: time="2025-11-06T00:32:21.365404958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:32:21.753318 containerd[1616]: time="2025-11-06T00:32:21.753096030Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:21.757014 containerd[1616]: time="2025-11-06T00:32:21.756836669Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:32:21.757014 containerd[1616]: time="2025-11-06T00:32:21.756951647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:32:21.758118 kubelet[2801]: E1106 00:32:21.757273 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:32:21.758118 kubelet[2801]: E1106 00:32:21.757332 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:32:21.758118 kubelet[2801]: E1106 00:32:21.757458 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bxgzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminatio
nMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xwlzm_calico-system(3e3d7027-cc01-4677-b498-d2aaae1cd6f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:21.762208 kubelet[2801]: E1106 00:32:21.758975 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2" Nov 6 00:32:22.960035 containerd[1616]: time="2025-11-06T00:32:22.959921466Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:32:23.343226 containerd[1616]: time="2025-11-06T00:32:23.342732987Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:23.347488 containerd[1616]: time="2025-11-06T00:32:23.347278620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:32:23.347488 containerd[1616]: time="2025-11-06T00:32:23.347409547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:32:23.351674 kubelet[2801]: E1106 00:32:23.348899 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:32:23.351674 kubelet[2801]: E1106 00:32:23.348981 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:32:23.351674 kubelet[2801]: E1106 00:32:23.349225 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gtjpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-589569c468-rpmp8_calico-system(52f5f93c-5f24-4f92-88a3-401da8e7e300): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:23.352466 containerd[1616]: time="2025-11-06T00:32:23.350990742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:32:23.357998 kubelet[2801]: E1106 00:32:23.356487 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-589569c468-rpmp8" podUID="52f5f93c-5f24-4f92-88a3-401da8e7e300" Nov 6 00:32:23.749177 containerd[1616]: 
time="2025-11-06T00:32:23.749076066Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:23.758589 containerd[1616]: time="2025-11-06T00:32:23.758289342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:32:23.759875 containerd[1616]: time="2025-11-06T00:32:23.758370124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:32:23.759965 kubelet[2801]: E1106 00:32:23.759030 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:32:23.759965 kubelet[2801]: E1106 00:32:23.759099 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:32:23.759965 kubelet[2801]: E1106 00:32:23.759282 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69hwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-v9zqm_calico-system(e8483869-a3c9-4d7b-858a-1505af0fb5d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:23.760608 kubelet[2801]: E1106 00:32:23.760523 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v9zqm" podUID="e8483869-a3c9-4d7b-858a-1505af0fb5d9" Nov 6 00:32:23.953828 kubelet[2801]: E1106 00:32:23.951359 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:32:24.745973 systemd[1]: Started 
sshd@10-10.0.0.111:22-10.0.0.1:42928.service - OpenSSH per-connection server daemon (10.0.0.1:42928). Nov 6 00:32:24.864698 sshd[5427]: Accepted publickey for core from 10.0.0.1 port 42928 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:32:24.864913 sshd-session[5427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:24.890982 systemd-logind[1591]: New session 11 of user core. Nov 6 00:32:24.905258 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 00:32:24.956719 containerd[1616]: time="2025-11-06T00:32:24.956670713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:32:25.091919 sshd[5430]: Connection closed by 10.0.0.1 port 42928 Nov 6 00:32:25.090330 sshd-session[5427]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:25.104522 systemd[1]: sshd@10-10.0.0.111:22-10.0.0.1:42928.service: Deactivated successfully. Nov 6 00:32:25.109318 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 00:32:25.112470 systemd-logind[1591]: Session 11 logged out. Waiting for processes to exit. Nov 6 00:32:25.116308 systemd-logind[1591]: Removed session 11. 
Nov 6 00:32:25.312739 containerd[1616]: time="2025-11-06T00:32:25.311120922Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:25.316708 containerd[1616]: time="2025-11-06T00:32:25.315070893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:32:25.316708 containerd[1616]: time="2025-11-06T00:32:25.315117140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:32:25.316924 kubelet[2801]: E1106 00:32:25.315369 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:32:25.316924 kubelet[2801]: E1106 00:32:25.315426 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:32:25.316924 kubelet[2801]: E1106 00:32:25.315577 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdhnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6977b59f7b-5thqk_calico-apiserver(0228b1a2-410c-40ab-86ee-d344f8e34170): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:25.318105 kubelet[2801]: E1106 00:32:25.317886 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6977b59f7b-5thqk" podUID="0228b1a2-410c-40ab-86ee-d344f8e34170" Nov 6 00:32:27.953604 kubelet[2801]: E1106 00:32:27.953545 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-786c755bc5-fhq72" podUID="d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397" Nov 6 00:32:28.815025 containerd[1616]: time="2025-11-06T00:32:28.814976413Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"f74d929ed53749c7d83123cb12c3bd58dafcd96f2b0deef6eb7b25ac9c2c82c8\" id:\"4801d358480c9413b44357a34f8066abc497c48450c8e2d25bd2dc0fa80a54e1\" pid:5455 exited_at:{seconds:1762389148 nanos:814241912}" Nov 6 00:32:28.819493 kubelet[2801]: E1106 00:32:28.819438 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:32:30.109371 systemd[1]: Started sshd@11-10.0.0.111:22-10.0.0.1:56006.service - OpenSSH per-connection server daemon (10.0.0.1:56006). Nov 6 00:32:30.177277 sshd[5471]: Accepted publickey for core from 10.0.0.1 port 56006 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:32:30.179141 sshd-session[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:30.185144 systemd-logind[1591]: New session 12 of user core. Nov 6 00:32:30.195445 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 00:32:30.358368 sshd[5474]: Connection closed by 10.0.0.1 port 56006 Nov 6 00:32:30.358757 sshd-session[5471]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:30.364359 systemd[1]: sshd@11-10.0.0.111:22-10.0.0.1:56006.service: Deactivated successfully. Nov 6 00:32:30.367218 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 00:32:30.368238 systemd-logind[1591]: Session 12 logged out. Waiting for processes to exit. Nov 6 00:32:30.370060 systemd-logind[1591]: Removed session 12. 
Nov 6 00:32:31.953113 kubelet[2801]: E1106 00:32:31.953026 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb775b858-p9q9w" podUID="23a4e2d6-7e35-4d28-a47f-d87913358f1f" Nov 6 00:32:31.953113 kubelet[2801]: E1106 00:32:31.953095 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb775b858-snbnl" podUID="4977dfeb-b401-43e8-996c-8b0f6fd603a7" Nov 6 00:32:33.952886 kubelet[2801]: E1106 00:32:33.952762 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v9zqm" podUID="e8483869-a3c9-4d7b-858a-1505af0fb5d9" Nov 6 00:32:34.952954 kubelet[2801]: E1106 00:32:34.952872 2801 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-589569c468-rpmp8" podUID="52f5f93c-5f24-4f92-88a3-401da8e7e300" Nov 6 00:32:35.374131 systemd[1]: Started sshd@12-10.0.0.111:22-10.0.0.1:56018.service - OpenSSH per-connection server daemon (10.0.0.1:56018). Nov 6 00:32:35.445511 sshd[5491]: Accepted publickey for core from 10.0.0.1 port 56018 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:32:35.447899 sshd-session[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:35.454220 systemd-logind[1591]: New session 13 of user core. Nov 6 00:32:35.466453 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 00:32:35.614449 sshd[5494]: Connection closed by 10.0.0.1 port 56018 Nov 6 00:32:35.615747 sshd-session[5491]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:35.635898 systemd[1]: sshd@12-10.0.0.111:22-10.0.0.1:56018.service: Deactivated successfully. Nov 6 00:32:35.640058 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 00:32:35.643054 systemd-logind[1591]: Session 13 logged out. Waiting for processes to exit. Nov 6 00:32:35.645646 systemd[1]: Started sshd@13-10.0.0.111:22-10.0.0.1:56028.service - OpenSSH per-connection server daemon (10.0.0.1:56028). Nov 6 00:32:35.647986 systemd-logind[1591]: Removed session 13. 
Nov 6 00:32:35.723143 sshd[5508]: Accepted publickey for core from 10.0.0.1 port 56028 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:32:35.725242 sshd-session[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:35.731461 systemd-logind[1591]: New session 14 of user core. Nov 6 00:32:35.744279 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 00:32:35.894900 sshd[5511]: Connection closed by 10.0.0.1 port 56028 Nov 6 00:32:35.895301 sshd-session[5508]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:35.913353 systemd[1]: sshd@13-10.0.0.111:22-10.0.0.1:56028.service: Deactivated successfully. Nov 6 00:32:35.917159 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 00:32:35.919234 systemd-logind[1591]: Session 14 logged out. Waiting for processes to exit. Nov 6 00:32:35.923725 systemd[1]: Started sshd@14-10.0.0.111:22-10.0.0.1:56034.service - OpenSSH per-connection server daemon (10.0.0.1:56034). Nov 6 00:32:35.924729 systemd-logind[1591]: Removed session 14. 
Nov 6 00:32:35.952963 kubelet[2801]: E1106 00:32:35.952842 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6977b59f7b-5thqk" podUID="0228b1a2-410c-40ab-86ee-d344f8e34170" Nov 6 00:32:35.982751 sshd[5523]: Accepted publickey for core from 10.0.0.1 port 56034 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:32:35.984641 sshd-session[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:35.989593 systemd-logind[1591]: New session 15 of user core. Nov 6 00:32:35.998084 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 00:32:36.121068 sshd[5526]: Connection closed by 10.0.0.1 port 56034 Nov 6 00:32:36.121410 sshd-session[5523]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:36.126914 systemd[1]: sshd@14-10.0.0.111:22-10.0.0.1:56034.service: Deactivated successfully. Nov 6 00:32:36.129195 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 00:32:36.130135 systemd-logind[1591]: Session 15 logged out. Waiting for processes to exit. Nov 6 00:32:36.131372 systemd-logind[1591]: Removed session 15. 
Nov 6 00:32:36.955763 kubelet[2801]: E1106 00:32:36.955708 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2" Nov 6 00:32:40.953069 containerd[1616]: time="2025-11-06T00:32:40.953010773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:32:41.141604 systemd[1]: Started sshd@15-10.0.0.111:22-10.0.0.1:39108.service - OpenSSH per-connection server daemon (10.0.0.1:39108). Nov 6 00:32:41.205538 sshd[5539]: Accepted publickey for core from 10.0.0.1 port 39108 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:32:41.208989 sshd-session[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:41.215002 systemd-logind[1591]: New session 16 of user core. Nov 6 00:32:41.219145 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 6 00:32:41.406880 sshd[5542]: Connection closed by 10.0.0.1 port 39108 Nov 6 00:32:41.407283 sshd-session[5539]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:41.413074 systemd[1]: sshd@15-10.0.0.111:22-10.0.0.1:39108.service: Deactivated successfully. Nov 6 00:32:41.415553 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 00:32:41.417723 systemd-logind[1591]: Session 16 logged out. Waiting for processes to exit. Nov 6 00:32:41.419863 systemd-logind[1591]: Removed session 16. Nov 6 00:32:41.441897 containerd[1616]: time="2025-11-06T00:32:41.441816619Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:41.548628 containerd[1616]: time="2025-11-06T00:32:41.548359750Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:32:41.548628 containerd[1616]: time="2025-11-06T00:32:41.548433128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:32:41.548827 kubelet[2801]: E1106 00:32:41.548775 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:32:41.549250 kubelet[2801]: E1106 00:32:41.548847 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:32:41.549250 kubelet[2801]: E1106 00:32:41.549042 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c9e2f274426d4fdcb37983441f1257fa,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25tgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786c755bc5-fhq72_calico-system(d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 
6 00:32:41.551412 containerd[1616]: time="2025-11-06T00:32:41.551364068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:32:41.948213 containerd[1616]: time="2025-11-06T00:32:41.948156805Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:41.949895 containerd[1616]: time="2025-11-06T00:32:41.949841720Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:32:41.949996 containerd[1616]: time="2025-11-06T00:32:41.949913325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:32:41.950143 kubelet[2801]: E1106 00:32:41.950088 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:32:41.950208 kubelet[2801]: E1106 00:32:41.950161 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:32:41.950360 kubelet[2801]: E1106 00:32:41.950311 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-25tgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786c755bc5-fhq72_calico-system(d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:41.952308 kubelet[2801]: E1106 00:32:41.952254 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-786c755bc5-fhq72" podUID="d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397" Nov 6 00:32:42.952399 containerd[1616]: time="2025-11-06T00:32:42.952333128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:32:43.340286 containerd[1616]: time="2025-11-06T00:32:43.340093040Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:43.343095 containerd[1616]: time="2025-11-06T00:32:43.343033055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:32:43.343095 containerd[1616]: time="2025-11-06T00:32:43.343085014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:32:43.343359 
kubelet[2801]: E1106 00:32:43.343299 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:32:43.343755 kubelet[2801]: E1106 00:32:43.343363 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:32:43.343755 kubelet[2801]: E1106 00:32:43.343543 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xgbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb775b858-snbnl_calico-apiserver(4977dfeb-b401-43e8-996c-8b0f6fd603a7): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:43.344746 kubelet[2801]: E1106 00:32:43.344691 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb775b858-snbnl" podUID="4977dfeb-b401-43e8-996c-8b0f6fd603a7" Nov 6 00:32:44.953293 containerd[1616]: time="2025-11-06T00:32:44.953244441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:32:45.335035 containerd[1616]: time="2025-11-06T00:32:45.334839150Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:45.343251 containerd[1616]: time="2025-11-06T00:32:45.343168436Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:32:45.343388 containerd[1616]: time="2025-11-06T00:32:45.343271130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:32:45.343569 kubelet[2801]: E1106 00:32:45.343503 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:32:45.344089 kubelet[2801]: E1106 00:32:45.343581 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:32:45.344089 kubelet[2801]: E1106 00:32:45.343768 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qksrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb775b858-p9q9w_calico-apiserver(23a4e2d6-7e35-4d28-a47f-d87913358f1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:45.345031 kubelet[2801]: E1106 00:32:45.344994 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb775b858-p9q9w" podUID="23a4e2d6-7e35-4d28-a47f-d87913358f1f" Nov 6 00:32:45.952775 containerd[1616]: time="2025-11-06T00:32:45.952720462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:32:46.416545 containerd[1616]: 
time="2025-11-06T00:32:46.416370328Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:46.421560 systemd[1]: Started sshd@16-10.0.0.111:22-10.0.0.1:39124.service - OpenSSH per-connection server daemon (10.0.0.1:39124). Nov 6 00:32:46.463886 containerd[1616]: time="2025-11-06T00:32:46.463808050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:32:46.463886 containerd[1616]: time="2025-11-06T00:32:46.463852074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:32:46.464241 kubelet[2801]: E1106 00:32:46.464171 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:32:46.464241 kubelet[2801]: E1106 00:32:46.464240 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:32:46.464694 kubelet[2801]: E1106 00:32:46.464413 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69hwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-v9zqm_calico-system(e8483869-a3c9-4d7b-858a-1505af0fb5d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:46.465668 kubelet[2801]: E1106 00:32:46.465621 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v9zqm" podUID="e8483869-a3c9-4d7b-858a-1505af0fb5d9" Nov 6 00:32:46.480693 sshd[5566]: Accepted publickey for core from 10.0.0.1 port 39124 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:32:46.482664 sshd-session[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 
00:32:46.487636 systemd-logind[1591]: New session 17 of user core. Nov 6 00:32:46.498087 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 00:32:47.028698 sshd[5569]: Connection closed by 10.0.0.1 port 39124 Nov 6 00:32:47.029021 sshd-session[5566]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:47.034688 systemd[1]: sshd@16-10.0.0.111:22-10.0.0.1:39124.service: Deactivated successfully. Nov 6 00:32:47.037154 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 00:32:47.038211 systemd-logind[1591]: Session 17 logged out. Waiting for processes to exit. Nov 6 00:32:47.039384 systemd-logind[1591]: Removed session 17. Nov 6 00:32:47.952891 containerd[1616]: time="2025-11-06T00:32:47.952721089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:32:48.343041 containerd[1616]: time="2025-11-06T00:32:48.342875042Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:48.390605 containerd[1616]: time="2025-11-06T00:32:48.390504772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:32:48.390605 containerd[1616]: time="2025-11-06T00:32:48.390518929Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:32:48.390954 kubelet[2801]: E1106 00:32:48.390866 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:32:48.391409 kubelet[2801]: E1106 00:32:48.390966 2801 kuberuntime_image.go:55] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:32:48.391409 kubelet[2801]: E1106 00:32:48.391119 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bxgzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:n
il,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xwlzm_calico-system(3e3d7027-cc01-4677-b498-d2aaae1cd6f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:48.393886 containerd[1616]: time="2025-11-06T00:32:48.393848768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:32:48.877807 containerd[1616]: time="2025-11-06T00:32:48.877731255Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:49.012504 containerd[1616]: time="2025-11-06T00:32:49.012347460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:32:49.012504 containerd[1616]: time="2025-11-06T00:32:49.012446116Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:32:49.013418 kubelet[2801]: E1106 00:32:49.012735 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" 
Nov 6 00:32:49.013418 kubelet[2801]: E1106 00:32:49.012796 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:32:49.013418 kubelet[2801]: E1106 00:32:49.012918 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bxgzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAs
User:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xwlzm_calico-system(3e3d7027-cc01-4677-b498-d2aaae1cd6f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:49.014142 kubelet[2801]: E1106 00:32:49.014102 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2" Nov 6 00:32:49.953083 containerd[1616]: time="2025-11-06T00:32:49.953008659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:32:50.416409 containerd[1616]: time="2025-11-06T00:32:50.416241593Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Nov 6 00:32:50.441768 containerd[1616]: time="2025-11-06T00:32:50.441698117Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:32:50.441768 containerd[1616]: time="2025-11-06T00:32:50.441763931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:32:50.442065 kubelet[2801]: E1106 00:32:50.442010 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:32:50.442532 kubelet[2801]: E1106 00:32:50.442080 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:32:50.442532 kubelet[2801]: E1106 00:32:50.442267 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gtjpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-589569c468-rpmp8_calico-system(52f5f93c-5f24-4f92-88a3-401da8e7e300): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:50.443526 kubelet[2801]: E1106 00:32:50.443458 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-589569c468-rpmp8" podUID="52f5f93c-5f24-4f92-88a3-401da8e7e300" Nov 6 00:32:50.953380 containerd[1616]: time="2025-11-06T00:32:50.953133644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:32:51.301454 containerd[1616]: 
time="2025-11-06T00:32:51.301270765Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:32:51.303169 containerd[1616]: time="2025-11-06T00:32:51.303108826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:32:51.303253 containerd[1616]: time="2025-11-06T00:32:51.303196522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:32:51.303476 kubelet[2801]: E1106 00:32:51.303405 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:32:51.303585 kubelet[2801]: E1106 00:32:51.303476 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:32:51.303675 kubelet[2801]: E1106 00:32:51.303620 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdhnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6977b59f7b-5thqk_calico-apiserver(0228b1a2-410c-40ab-86ee-d344f8e34170): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:32:51.304829 kubelet[2801]: E1106 00:32:51.304771 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6977b59f7b-5thqk" podUID="0228b1a2-410c-40ab-86ee-d344f8e34170" Nov 6 00:32:52.042974 systemd[1]: Started sshd@17-10.0.0.111:22-10.0.0.1:46786.service - OpenSSH per-connection server daemon (10.0.0.1:46786). Nov 6 00:32:52.101699 sshd[5586]: Accepted publickey for core from 10.0.0.1 port 46786 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:32:52.103432 sshd-session[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:52.111662 systemd-logind[1591]: New session 18 of user core. Nov 6 00:32:52.120240 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 00:32:52.247224 sshd[5589]: Connection closed by 10.0.0.1 port 46786 Nov 6 00:32:52.247573 sshd-session[5586]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:52.252990 systemd[1]: sshd@17-10.0.0.111:22-10.0.0.1:46786.service: Deactivated successfully. Nov 6 00:32:52.255755 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 00:32:52.256681 systemd-logind[1591]: Session 18 logged out. Waiting for processes to exit. Nov 6 00:32:52.259100 systemd-logind[1591]: Removed session 18. 
Nov 6 00:32:56.954565 kubelet[2801]: E1106 00:32:56.954452 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-786c755bc5-fhq72" podUID="d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397" Nov 6 00:32:57.259867 systemd[1]: Started sshd@18-10.0.0.111:22-10.0.0.1:46800.service - OpenSSH per-connection server daemon (10.0.0.1:46800). Nov 6 00:32:57.322033 sshd[5604]: Accepted publickey for core from 10.0.0.1 port 46800 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:32:57.324036 sshd-session[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:32:57.329282 systemd-logind[1591]: New session 19 of user core. Nov 6 00:32:57.340133 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 00:32:57.516460 sshd[5607]: Connection closed by 10.0.0.1 port 46800 Nov 6 00:32:57.516812 sshd-session[5604]: pam_unix(sshd:session): session closed for user core Nov 6 00:32:57.523588 systemd[1]: sshd@18-10.0.0.111:22-10.0.0.1:46800.service: Deactivated successfully. Nov 6 00:32:57.526791 systemd[1]: session-19.scope: Deactivated successfully. 
Nov 6 00:32:57.527838 systemd-logind[1591]: Session 19 logged out. Waiting for processes to exit. Nov 6 00:32:57.529313 systemd-logind[1591]: Removed session 19. Nov 6 00:32:57.952782 kubelet[2801]: E1106 00:32:57.952708 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb775b858-snbnl" podUID="4977dfeb-b401-43e8-996c-8b0f6fd603a7" Nov 6 00:32:58.871225 containerd[1616]: time="2025-11-06T00:32:58.871150636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f74d929ed53749c7d83123cb12c3bd58dafcd96f2b0deef6eb7b25ac9c2c82c8\" id:\"85be999f84604a3f8b1b587bd72a5da905cf5e8b94601367539e7b5785c9783e\" pid:5632 exited_at:{seconds:1762389178 nanos:870765810}" Nov 6 00:32:58.952608 kubelet[2801]: E1106 00:32:58.952552 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb775b858-p9q9w" podUID="23a4e2d6-7e35-4d28-a47f-d87913358f1f" Nov 6 00:32:59.953112 kubelet[2801]: E1106 00:32:59.953062 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2" Nov 6 00:33:01.952915 kubelet[2801]: E1106 00:33:01.952865 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v9zqm" podUID="e8483869-a3c9-4d7b-858a-1505af0fb5d9" Nov 6 00:33:02.530299 systemd[1]: Started sshd@19-10.0.0.111:22-10.0.0.1:56308.service - OpenSSH per-connection server daemon (10.0.0.1:56308). Nov 6 00:33:02.591965 sshd[5646]: Accepted publickey for core from 10.0.0.1 port 56308 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:33:02.594029 sshd-session[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:33:02.599566 systemd-logind[1591]: New session 20 of user core. 
Nov 6 00:33:02.611105 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 00:33:03.037888 sshd[5649]: Connection closed by 10.0.0.1 port 56308 Nov 6 00:33:03.038265 sshd-session[5646]: pam_unix(sshd:session): session closed for user core Nov 6 00:33:03.043636 systemd[1]: sshd@19-10.0.0.111:22-10.0.0.1:56308.service: Deactivated successfully. Nov 6 00:33:03.046183 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 00:33:03.047039 systemd-logind[1591]: Session 20 logged out. Waiting for processes to exit. Nov 6 00:33:03.048419 systemd-logind[1591]: Removed session 20. Nov 6 00:33:04.951783 kubelet[2801]: E1106 00:33:04.951585 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:33:04.953034 kubelet[2801]: E1106 00:33:04.952508 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-589569c468-rpmp8" podUID="52f5f93c-5f24-4f92-88a3-401da8e7e300" Nov 6 00:33:05.953002 kubelet[2801]: E1106 00:33:05.952955 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6977b59f7b-5thqk" podUID="0228b1a2-410c-40ab-86ee-d344f8e34170" Nov 6 00:33:08.050840 systemd[1]: Started sshd@20-10.0.0.111:22-10.0.0.1:56314.service - OpenSSH per-connection server daemon (10.0.0.1:56314). Nov 6 00:33:08.105577 sshd[5662]: Accepted publickey for core from 10.0.0.1 port 56314 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:33:08.107180 sshd-session[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:33:08.112005 systemd-logind[1591]: New session 21 of user core. Nov 6 00:33:08.128286 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 00:33:08.425611 sshd[5665]: Connection closed by 10.0.0.1 port 56314 Nov 6 00:33:08.425961 sshd-session[5662]: pam_unix(sshd:session): session closed for user core Nov 6 00:33:08.438240 systemd[1]: sshd@20-10.0.0.111:22-10.0.0.1:56314.service: Deactivated successfully. Nov 6 00:33:08.440414 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 00:33:08.441308 systemd-logind[1591]: Session 21 logged out. Waiting for processes to exit. Nov 6 00:33:08.444260 systemd[1]: Started sshd@21-10.0.0.111:22-10.0.0.1:56326.service - OpenSSH per-connection server daemon (10.0.0.1:56326). Nov 6 00:33:08.444847 systemd-logind[1591]: Removed session 21. Nov 6 00:33:08.508955 sshd[5678]: Accepted publickey for core from 10.0.0.1 port 56326 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:33:08.510467 sshd-session[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:33:08.515782 systemd-logind[1591]: New session 22 of user core. Nov 6 00:33:08.529140 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 6 00:33:08.954457 kubelet[2801]: E1106 00:33:08.954398 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-786c755bc5-fhq72" podUID="d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397" Nov 6 00:33:09.908394 sshd[5681]: Connection closed by 10.0.0.1 port 56326 Nov 6 00:33:09.908880 sshd-session[5678]: pam_unix(sshd:session): session closed for user core Nov 6 00:33:09.920225 systemd[1]: sshd@21-10.0.0.111:22-10.0.0.1:56326.service: Deactivated successfully. Nov 6 00:33:09.922911 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 00:33:09.923876 systemd-logind[1591]: Session 22 logged out. Waiting for processes to exit. Nov 6 00:33:09.927856 systemd[1]: Started sshd@22-10.0.0.111:22-10.0.0.1:56332.service - OpenSSH per-connection server daemon (10.0.0.1:56332). Nov 6 00:33:09.928661 systemd-logind[1591]: Removed session 22. 
Nov 6 00:33:10.004504 sshd[5693]: Accepted publickey for core from 10.0.0.1 port 56332 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:33:10.005906 sshd-session[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:33:10.011061 systemd-logind[1591]: New session 23 of user core. Nov 6 00:33:10.019122 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 00:33:11.952371 kubelet[2801]: E1106 00:33:11.952280 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb775b858-snbnl" podUID="4977dfeb-b401-43e8-996c-8b0f6fd603a7" Nov 6 00:33:12.797456 sshd[5696]: Connection closed by 10.0.0.1 port 56332 Nov 6 00:33:12.797833 sshd-session[5693]: pam_unix(sshd:session): session closed for user core Nov 6 00:33:12.815662 systemd[1]: sshd@22-10.0.0.111:22-10.0.0.1:56332.service: Deactivated successfully. Nov 6 00:33:12.820200 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 00:33:12.821746 systemd-logind[1591]: Session 23 logged out. Waiting for processes to exit. Nov 6 00:33:12.826066 systemd[1]: Started sshd@23-10.0.0.111:22-10.0.0.1:38242.service - OpenSSH per-connection server daemon (10.0.0.1:38242). Nov 6 00:33:12.828816 systemd-logind[1591]: Removed session 23. 
Nov 6 00:33:12.886783 sshd[5715]: Accepted publickey for core from 10.0.0.1 port 38242 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:33:12.888553 sshd-session[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:33:12.893996 systemd-logind[1591]: New session 24 of user core. Nov 6 00:33:12.906234 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 6 00:33:12.953269 kubelet[2801]: E1106 00:33:12.953185 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2" Nov 6 00:33:13.372722 sshd[5718]: Connection closed by 10.0.0.1 port 38242 Nov 6 00:33:13.373179 sshd-session[5715]: pam_unix(sshd:session): session closed for user core Nov 6 00:33:13.385022 systemd[1]: sshd@23-10.0.0.111:22-10.0.0.1:38242.service: Deactivated successfully. Nov 6 00:33:13.387555 systemd[1]: session-24.scope: Deactivated successfully. Nov 6 00:33:13.388494 systemd-logind[1591]: Session 24 logged out. Waiting for processes to exit. 
Nov 6 00:33:13.392524 systemd[1]: Started sshd@24-10.0.0.111:22-10.0.0.1:38256.service - OpenSSH per-connection server daemon (10.0.0.1:38256). Nov 6 00:33:13.393450 systemd-logind[1591]: Removed session 24. Nov 6 00:33:13.451966 sshd[5730]: Accepted publickey for core from 10.0.0.1 port 38256 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:33:13.453759 sshd-session[5730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:33:13.459412 systemd-logind[1591]: New session 25 of user core. Nov 6 00:33:13.471218 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 6 00:33:13.609837 sshd[5733]: Connection closed by 10.0.0.1 port 38256 Nov 6 00:33:13.611165 sshd-session[5730]: pam_unix(sshd:session): session closed for user core Nov 6 00:33:13.617190 systemd[1]: sshd@24-10.0.0.111:22-10.0.0.1:38256.service: Deactivated successfully. Nov 6 00:33:13.620144 systemd[1]: session-25.scope: Deactivated successfully. Nov 6 00:33:13.621234 systemd-logind[1591]: Session 25 logged out. Waiting for processes to exit. Nov 6 00:33:13.623583 systemd-logind[1591]: Removed session 25. 
Nov 6 00:33:13.956282 kubelet[2801]: E1106 00:33:13.955545 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb775b858-p9q9w" podUID="23a4e2d6-7e35-4d28-a47f-d87913358f1f" Nov 6 00:33:13.956282 kubelet[2801]: E1106 00:33:13.955553 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v9zqm" podUID="e8483869-a3c9-4d7b-858a-1505af0fb5d9" Nov 6 00:33:16.953743 kubelet[2801]: E1106 00:33:16.953558 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-589569c468-rpmp8" podUID="52f5f93c-5f24-4f92-88a3-401da8e7e300" Nov 6 00:33:17.952231 kubelet[2801]: E1106 00:33:17.952182 2801 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:33:18.630568 systemd[1]: Started sshd@25-10.0.0.111:22-10.0.0.1:38270.service - OpenSSH per-connection server daemon (10.0.0.1:38270). Nov 6 00:33:18.716904 sshd[5747]: Accepted publickey for core from 10.0.0.1 port 38270 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:33:18.718617 sshd-session[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:33:18.723089 systemd-logind[1591]: New session 26 of user core. Nov 6 00:33:18.730199 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 6 00:33:18.844257 sshd[5750]: Connection closed by 10.0.0.1 port 38270 Nov 6 00:33:18.844561 sshd-session[5747]: pam_unix(sshd:session): session closed for user core Nov 6 00:33:18.848887 systemd[1]: sshd@25-10.0.0.111:22-10.0.0.1:38270.service: Deactivated successfully. Nov 6 00:33:18.851401 systemd[1]: session-26.scope: Deactivated successfully. Nov 6 00:33:18.852371 systemd-logind[1591]: Session 26 logged out. Waiting for processes to exit. Nov 6 00:33:18.853623 systemd-logind[1591]: Removed session 26. 
Nov 6 00:33:18.955161 kubelet[2801]: E1106 00:33:18.955105 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6977b59f7b-5thqk" podUID="0228b1a2-410c-40ab-86ee-d344f8e34170" Nov 6 00:33:21.952440 containerd[1616]: time="2025-11-06T00:33:21.952381984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:33:22.345436 containerd[1616]: time="2025-11-06T00:33:22.345261518Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:33:22.346704 containerd[1616]: time="2025-11-06T00:33:22.346642733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:33:22.346874 containerd[1616]: time="2025-11-06T00:33:22.346725319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:33:22.346978 kubelet[2801]: E1106 00:33:22.346894 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:33:22.347378 kubelet[2801]: E1106 00:33:22.346998 2801 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:33:22.347378 kubelet[2801]: E1106 00:33:22.347145 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c9e2f274426d4fdcb37983441f1257fa,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25tgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-786c755bc5-fhq72_calico-system(d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:33:22.349428 containerd[1616]: time="2025-11-06T00:33:22.349356199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:33:22.799694 containerd[1616]: time="2025-11-06T00:33:22.799637247Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:33:22.801319 containerd[1616]: time="2025-11-06T00:33:22.801289032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:33:22.801460 containerd[1616]: time="2025-11-06T00:33:22.801401915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:33:22.801562 kubelet[2801]: E1106 00:33:22.801515 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:33:22.801613 kubelet[2801]: E1106 00:33:22.801573 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:33:22.801729 kubelet[2801]: E1106 00:33:22.801690 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-25tgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-786c755bc5-fhq72_calico-system(d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:33:22.803093 kubelet[2801]: E1106 00:33:22.803039 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-786c755bc5-fhq72" podUID="d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397" Nov 6 00:33:23.866837 systemd[1]: Started sshd@26-10.0.0.111:22-10.0.0.1:38754.service - OpenSSH per-connection server daemon (10.0.0.1:38754). Nov 6 00:33:23.925994 sshd[5774]: Accepted publickey for core from 10.0.0.1 port 38754 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:33:23.927590 sshd-session[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:33:23.934022 systemd-logind[1591]: New session 27 of user core. Nov 6 00:33:23.943100 systemd[1]: Started session-27.scope - Session 27 of User core. 
Nov 6 00:33:23.952774 kubelet[2801]: E1106 00:33:23.952708 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2" Nov 6 00:33:24.065332 sshd[5779]: Connection closed by 10.0.0.1 port 38754 Nov 6 00:33:24.066882 sshd-session[5774]: pam_unix(sshd:session): session closed for user core Nov 6 00:33:24.078663 systemd[1]: sshd@26-10.0.0.111:22-10.0.0.1:38754.service: Deactivated successfully. Nov 6 00:33:24.081330 systemd[1]: session-27.scope: Deactivated successfully. Nov 6 00:33:24.082287 systemd-logind[1591]: Session 27 logged out. Waiting for processes to exit. Nov 6 00:33:24.083956 systemd-logind[1591]: Removed session 27. 
Nov 6 00:33:24.953518 containerd[1616]: time="2025-11-06T00:33:24.953422129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:33:25.267408 containerd[1616]: time="2025-11-06T00:33:25.267197428Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:33:25.268899 containerd[1616]: time="2025-11-06T00:33:25.268768620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:33:25.268899 containerd[1616]: time="2025-11-06T00:33:25.268867767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:33:25.269245 kubelet[2801]: E1106 00:33:25.269064 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:33:25.269245 kubelet[2801]: E1106 00:33:25.269132 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:33:25.269778 kubelet[2801]: E1106 00:33:25.269291 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xgbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb775b858-snbnl_calico-apiserver(4977dfeb-b401-43e8-996c-8b0f6fd603a7): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:33:25.270549 kubelet[2801]: E1106 00:33:25.270506 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb775b858-snbnl" podUID="4977dfeb-b401-43e8-996c-8b0f6fd603a7" Nov 6 00:33:28.794608 containerd[1616]: time="2025-11-06T00:33:28.794560965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f74d929ed53749c7d83123cb12c3bd58dafcd96f2b0deef6eb7b25ac9c2c82c8\" id:\"35e1837b74ac71a4e3fad705e8baf8060e338e1b478948d2c66672d6a8435de3\" pid:5805 exited_at:{seconds:1762389208 nanos:794218530}" Nov 6 00:33:28.952798 containerd[1616]: time="2025-11-06T00:33:28.952749149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:33:29.079563 systemd[1]: Started sshd@27-10.0.0.111:22-10.0.0.1:38762.service - OpenSSH per-connection server daemon (10.0.0.1:38762). Nov 6 00:33:29.220461 sshd[5821]: Accepted publickey for core from 10.0.0.1 port 38762 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:33:29.222544 sshd-session[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:33:29.233000 systemd-logind[1591]: New session 28 of user core. Nov 6 00:33:29.244141 systemd[1]: Started session-28.scope - Session 28 of User core. 
Nov 6 00:33:29.399131 sshd[5824]: Connection closed by 10.0.0.1 port 38762 Nov 6 00:33:29.400303 sshd-session[5821]: pam_unix(sshd:session): session closed for user core Nov 6 00:33:29.410315 systemd[1]: sshd@27-10.0.0.111:22-10.0.0.1:38762.service: Deactivated successfully. Nov 6 00:33:29.412625 systemd[1]: session-28.scope: Deactivated successfully. Nov 6 00:33:29.413662 systemd-logind[1591]: Session 28 logged out. Waiting for processes to exit. Nov 6 00:33:29.414987 systemd-logind[1591]: Removed session 28. Nov 6 00:33:29.427317 containerd[1616]: time="2025-11-06T00:33:29.427271184Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:33:29.490179 containerd[1616]: time="2025-11-06T00:33:29.490076690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:33:29.490179 containerd[1616]: time="2025-11-06T00:33:29.490128467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:33:29.490465 kubelet[2801]: E1106 00:33:29.490408 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:33:29.490911 kubelet[2801]: E1106 00:33:29.490483 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:33:29.490911 kubelet[2801]: E1106 00:33:29.490762 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qksrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb775b858-p9q9w_calico-apiserver(23a4e2d6-7e35-4d28-a47f-d87913358f1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:33:29.491103 containerd[1616]: time="2025-11-06T00:33:29.490852853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:33:29.492360 kubelet[2801]: E1106 00:33:29.492292 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb775b858-p9q9w" podUID="23a4e2d6-7e35-4d28-a47f-d87913358f1f" Nov 6 00:33:29.917422 containerd[1616]: 
time="2025-11-06T00:33:29.917361668Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:33:29.943166 containerd[1616]: time="2025-11-06T00:33:29.943061589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:33:29.943166 containerd[1616]: time="2025-11-06T00:33:29.943111854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:33:29.943522 kubelet[2801]: E1106 00:33:29.943446 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:33:29.943595 kubelet[2801]: E1106 00:33:29.943527 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:33:29.943760 kubelet[2801]: E1106 00:33:29.943693 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69hwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-v9zqm_calico-system(e8483869-a3c9-4d7b-858a-1505af0fb5d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:33:29.945027 kubelet[2801]: E1106 00:33:29.944857 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v9zqm" podUID="e8483869-a3c9-4d7b-858a-1505af0fb5d9" Nov 6 00:33:29.952025 kubelet[2801]: E1106 00:33:29.951917 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:33:30.952923 kubelet[2801]: 
E1106 00:33:30.951773 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:33:30.952923 kubelet[2801]: E1106 00:33:30.951992 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:33:30.952923 kubelet[2801]: E1106 00:33:30.952587 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6977b59f7b-5thqk" podUID="0228b1a2-410c-40ab-86ee-d344f8e34170" Nov 6 00:33:30.953487 containerd[1616]: time="2025-11-06T00:33:30.952686808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:33:31.340922 containerd[1616]: time="2025-11-06T00:33:31.340741157Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:33:31.545283 containerd[1616]: time="2025-11-06T00:33:31.545163582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:33:31.545510 containerd[1616]: time="2025-11-06T00:33:31.545220699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes 
read=85" Nov 6 00:33:31.545547 kubelet[2801]: E1106 00:33:31.545486 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:33:31.545616 kubelet[2801]: E1106 00:33:31.545556 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:33:31.545813 kubelet[2801]: E1106 00:33:31.545732 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gtjpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-589569c468-rpmp8_calico-system(52f5f93c-5f24-4f92-88a3-401da8e7e300): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:33:31.547077 kubelet[2801]: E1106 00:33:31.547002 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-589569c468-rpmp8" podUID="52f5f93c-5f24-4f92-88a3-401da8e7e300" Nov 6 00:33:34.418440 systemd[1]: Started sshd@28-10.0.0.111:22-10.0.0.1:40030.service - OpenSSH per-connection server daemon (10.0.0.1:40030). 
Nov 6 00:33:34.543200 sshd[5858]: Accepted publickey for core from 10.0.0.1 port 40030 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:33:34.546138 sshd-session[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:33:34.553022 systemd-logind[1591]: New session 29 of user core. Nov 6 00:33:34.562293 systemd[1]: Started session-29.scope - Session 29 of User core. Nov 6 00:33:34.715965 sshd[5861]: Connection closed by 10.0.0.1 port 40030 Nov 6 00:33:34.714901 sshd-session[5858]: pam_unix(sshd:session): session closed for user core Nov 6 00:33:34.723457 systemd-logind[1591]: Session 29 logged out. Waiting for processes to exit. Nov 6 00:33:34.724274 systemd[1]: sshd@28-10.0.0.111:22-10.0.0.1:40030.service: Deactivated successfully. Nov 6 00:33:34.726954 systemd[1]: session-29.scope: Deactivated successfully. Nov 6 00:33:34.729373 systemd-logind[1591]: Removed session 29. Nov 6 00:33:35.953271 containerd[1616]: time="2025-11-06T00:33:35.953225170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:33:36.460532 containerd[1616]: time="2025-11-06T00:33:36.460455288Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:33:36.643131 containerd[1616]: time="2025-11-06T00:33:36.642877829Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:33:36.643131 containerd[1616]: time="2025-11-06T00:33:36.642964877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:33:36.643338 kubelet[2801]: E1106 00:33:36.643221 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:33:36.643338 kubelet[2801]: E1106 00:33:36.643276 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:33:36.643788 kubelet[2801]: E1106 00:33:36.643464 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bxgzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Cap
abilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xwlzm_calico-system(3e3d7027-cc01-4677-b498-d2aaae1cd6f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:33:36.646446 containerd[1616]: time="2025-11-06T00:33:36.646399695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 6 00:33:36.955489 kubelet[2801]: E1106 00:33:36.955421 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-786c755bc5-fhq72" podUID="d8a1eb22-1c9e-4de5-b2e4-2283cdcf5397"
Nov 6 00:33:37.102539 containerd[1616]: time="2025-11-06T00:33:37.102437353Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:33:37.104533 containerd[1616]: time="2025-11-06T00:33:37.104463130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 6 00:33:37.104890 containerd[1616]: time="2025-11-06T00:33:37.104605654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 6 00:33:37.105076 kubelet[2801]: E1106 00:33:37.104882 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 6 00:33:37.105206 kubelet[2801]: E1106 00:33:37.105093 2801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 6 00:33:37.105907 kubelet[2801]: E1106 00:33:37.105620 2801 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bxgzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xwlzm_calico-system(3e3d7027-cc01-4677-b498-d2aaae1cd6f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:33:37.107156 kubelet[2801]: E1106 00:33:37.107082 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwlzm" podUID="3e3d7027-cc01-4677-b498-d2aaae1cd6f2"
Nov 6 00:33:39.728765 systemd[1]: Started sshd@29-10.0.0.111:22-10.0.0.1:40046.service - OpenSSH per-connection server daemon (10.0.0.1:40046).
Nov 6 00:33:39.804414 sshd[5875]: Accepted publickey for core from 10.0.0.1 port 40046 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw
Nov 6 00:33:39.806691 sshd-session[5875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:33:39.812802 systemd-logind[1591]: New session 30 of user core.
Nov 6 00:33:39.822216 systemd[1]: Started session-30.scope - Session 30 of User core.
Nov 6 00:33:39.952851 kubelet[2801]: E1106 00:33:39.952787 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb775b858-snbnl" podUID="4977dfeb-b401-43e8-996c-8b0f6fd603a7"
Nov 6 00:33:39.962391 sshd[5878]: Connection closed by 10.0.0.1 port 40046
Nov 6 00:33:39.962891 sshd-session[5875]: pam_unix(sshd:session): session closed for user core
Nov 6 00:33:39.970814 systemd[1]: sshd@29-10.0.0.111:22-10.0.0.1:40046.service: Deactivated successfully.
Nov 6 00:33:39.973563 systemd[1]: session-30.scope: Deactivated successfully.
Nov 6 00:33:39.974698 systemd-logind[1591]: Session 30 logged out. Waiting for processes to exit.
Nov 6 00:33:39.976819 systemd-logind[1591]: Removed session 30.
Nov 6 00:33:40.953154 kubelet[2801]: E1106 00:33:40.953085 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"