Jan 23 00:58:36.223222 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026
Jan 23 00:58:36.223244 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 00:58:36.223256 kernel: BIOS-provided physical RAM map:
Jan 23 00:58:36.223262 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 23 00:58:36.223268 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 23 00:58:36.223274 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 00:58:36.223280 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 23 00:58:36.223286 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 23 00:58:36.223452 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 23 00:58:36.223462 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 23 00:58:36.223469 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 00:58:36.223479 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 00:58:36.223485 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 00:58:36.223491 kernel: NX (Execute Disable) protection: active
Jan 23 00:58:36.223498 kernel: APIC: Static calls initialized
Jan 23 00:58:36.223505 kernel: SMBIOS 2.8 present.
Jan 23 00:58:36.223581 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 23 00:58:36.223589 kernel: DMI: Memory slots populated: 1/1
Jan 23 00:58:36.223596 kernel: Hypervisor detected: KVM
Jan 23 00:58:36.223602 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 23 00:58:36.223608 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 00:58:36.223614 kernel: kvm-clock: using sched offset of 20795700982 cycles
Jan 23 00:58:36.223621 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 00:58:36.223628 kernel: tsc: Detected 2445.424 MHz processor
Jan 23 00:58:36.223635 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 00:58:36.223642 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 00:58:36.223652 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 23 00:58:36.223658 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 00:58:36.223669 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 00:58:36.223679 kernel: Using GB pages for direct mapping
Jan 23 00:58:36.223690 kernel: ACPI: Early table checksum verification disabled
Jan 23 00:58:36.223700 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 23 00:58:36.223711 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:58:36.223718 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:58:36.223725 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:58:36.223735 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 23 00:58:36.223742 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:58:36.223749 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:58:36.223755 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:58:36.223891 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:58:36.223914 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 23 00:58:36.223931 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 23 00:58:36.223940 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 23 00:58:36.223950 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 23 00:58:36.223959 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 23 00:58:36.223970 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 23 00:58:36.223983 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 23 00:58:36.223994 kernel: No NUMA configuration found
Jan 23 00:58:36.224004 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 23 00:58:36.224018 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 23 00:58:36.224030 kernel: Zone ranges:
Jan 23 00:58:36.224041 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 00:58:36.224051 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 23 00:58:36.224060 kernel: Normal empty
Jan 23 00:58:36.224072 kernel: Device empty
Jan 23 00:58:36.224083 kernel: Movable zone start for each node
Jan 23 00:58:36.224092 kernel: Early memory node ranges
Jan 23 00:58:36.224102 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 00:58:36.224118 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 23 00:58:36.224128 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 23 00:58:36.224137 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 00:58:36.224149 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 00:58:36.224233 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 23 00:58:36.224246 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 00:58:36.224257 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 00:58:36.224268 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 00:58:36.224279 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 00:58:36.224448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 00:58:36.224464 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 00:58:36.224474 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 00:58:36.224483 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 00:58:36.224493 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 00:58:36.224505 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 00:58:36.224518 kernel: TSC deadline timer available
Jan 23 00:58:36.224526 kernel: CPU topo: Max. logical packages: 1
Jan 23 00:58:36.224533 kernel: CPU topo: Max. logical dies: 1
Jan 23 00:58:36.224544 kernel: CPU topo: Max. dies per package: 1
Jan 23 00:58:36.224551 kernel: CPU topo: Max. threads per core: 1
Jan 23 00:58:36.224558 kernel: CPU topo: Num. cores per package: 4
Jan 23 00:58:36.224566 kernel: CPU topo: Num. threads per package: 4
Jan 23 00:58:36.224578 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 23 00:58:36.224589 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 00:58:36.224597 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 23 00:58:36.224604 kernel: kvm-guest: setup PV sched yield
Jan 23 00:58:36.224610 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 23 00:58:36.224617 kernel: Booting paravirtualized kernel on KVM
Jan 23 00:58:36.224628 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 00:58:36.224635 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 23 00:58:36.224642 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 23 00:58:36.224648 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 23 00:58:36.224655 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 23 00:58:36.224662 kernel: kvm-guest: PV spinlocks enabled
Jan 23 00:58:36.224668 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 00:58:36.224676 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 00:58:36.224686 kernel: random: crng init done
Jan 23 00:58:36.224692 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 00:58:36.224699 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 00:58:36.224706 kernel: Fallback order for Node 0: 0
Jan 23 00:58:36.224713 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 23 00:58:36.224719 kernel: Policy zone: DMA32
Jan 23 00:58:36.224726 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 00:58:36.224733 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 23 00:58:36.224739 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 00:58:36.224749 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 00:58:36.224755 kernel: Dynamic Preempt: voluntary
Jan 23 00:58:36.224941 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 00:58:36.224952 kernel: rcu: RCU event tracing is enabled.
Jan 23 00:58:36.224959 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 23 00:58:36.224966 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 00:58:36.225037 kernel: Rude variant of Tasks RCU enabled.
Jan 23 00:58:36.225051 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 00:58:36.225063 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 00:58:36.225074 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 23 00:58:36.225081 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 00:58:36.225088 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 00:58:36.225095 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 00:58:36.225102 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 23 00:58:36.225109 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 00:58:36.225130 kernel: Console: colour VGA+ 80x25
Jan 23 00:58:36.225146 kernel: printk: legacy console [ttyS0] enabled
Jan 23 00:58:36.225156 kernel: ACPI: Core revision 20240827
Jan 23 00:58:36.225165 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 23 00:58:36.225175 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 00:58:36.225187 kernel: x2apic enabled
Jan 23 00:58:36.225204 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 00:58:36.225290 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 23 00:58:36.225300 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 23 00:58:36.225307 kernel: kvm-guest: setup PV IPIs
Jan 23 00:58:36.225314 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 00:58:36.225326 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 23 00:58:36.225413 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 23 00:58:36.225427 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 00:58:36.225440 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 00:58:36.225451 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 00:58:36.225458 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 00:58:36.225465 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 00:58:36.225472 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 00:58:36.225483 kernel: Speculative Store Bypass: Vulnerable
Jan 23 00:58:36.225490 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 00:58:36.225498 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 00:58:36.225505 kernel: active return thunk: srso_alias_return_thunk
Jan 23 00:58:36.225512 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 23 00:58:36.225519 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 23 00:58:36.225526 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 00:58:36.225533 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 00:58:36.225540 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 00:58:36.225550 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 00:58:36.225557 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 00:58:36.225564 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 23 00:58:36.225571 kernel: Freeing SMP alternatives memory: 32K
Jan 23 00:58:36.225579 kernel: pid_max: default: 32768 minimum: 301
Jan 23 00:58:36.225591 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 00:58:36.225602 kernel: landlock: Up and running.
Jan 23 00:58:36.225615 kernel: SELinux: Initializing.
Jan 23 00:58:36.225626 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 00:58:36.225641 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 00:58:36.225718 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 23 00:58:36.225730 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 23 00:58:36.225740 kernel: signal: max sigframe size: 1776
Jan 23 00:58:36.225752 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 00:58:36.225907 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 00:58:36.225923 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 00:58:36.225994 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 00:58:36.226001 kernel: smp: Bringing up secondary CPUs ...
Jan 23 00:58:36.226013 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 00:58:36.226020 kernel: .... node #0, CPUs: #1 #2 #3
Jan 23 00:58:36.226027 kernel: smp: Brought up 1 node, 4 CPUs
Jan 23 00:58:36.226034 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 23 00:58:36.226042 kernel: Memory: 2420720K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 145096K reserved, 0K cma-reserved)
Jan 23 00:58:36.226049 kernel: devtmpfs: initialized
Jan 23 00:58:36.226056 kernel: x86/mm: Memory block size: 128MB
Jan 23 00:58:36.226063 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 00:58:36.226070 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 23 00:58:36.226079 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 00:58:36.226086 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 00:58:36.226093 kernel: audit: initializing netlink subsys (disabled)
Jan 23 00:58:36.226103 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 00:58:36.226115 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 00:58:36.226127 kernel: audit: type=2000 audit(1769129904.282:1): state=initialized audit_enabled=0 res=1
Jan 23 00:58:36.226139 kernel: cpuidle: using governor menu
Jan 23 00:58:36.226152 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 00:58:36.226160 kernel: dca service started, version 1.12.1
Jan 23 00:58:36.226177 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 23 00:58:36.226184 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 23 00:58:36.226191 kernel: PCI: Using configuration type 1 for base access
Jan 23 00:58:36.226198 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 00:58:36.226205 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 00:58:36.226215 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 00:58:36.226227 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 00:58:36.226239 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 00:58:36.226256 kernel: ACPI: Added _OSI(Module Device)
Jan 23 00:58:36.226265 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 00:58:36.226272 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 00:58:36.227052 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 00:58:36.227067 kernel: ACPI: Interpreter enabled
Jan 23 00:58:36.227075 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 23 00:58:36.227082 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 00:58:36.227089 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 00:58:36.227096 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 00:58:36.227103 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 00:58:36.227116 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 00:58:36.228255 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 00:58:36.228564 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 00:58:36.228719 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 00:58:36.228729 kernel: PCI host bridge to bus 0000:00
Jan 23 00:58:36.229278 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 00:58:36.229531 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 00:58:36.229701 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 00:58:36.230018 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 23 00:58:36.230214 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 00:58:36.230476 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 23 00:58:36.230670 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 00:58:36.231204 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 00:58:36.231558 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 23 00:58:36.231707 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 23 00:58:36.232220 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 23 00:58:36.232643 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 23 00:58:36.233009 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 00:58:36.233161 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 11718 usecs
Jan 23 00:58:36.233943 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 00:58:36.234159 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 23 00:58:36.234308 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 23 00:58:36.234598 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 23 00:58:36.235506 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 00:58:36.235664 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 23 00:58:36.236019 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 23 00:58:36.236180 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 23 00:58:36.236618 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 00:58:36.237112 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 23 00:58:36.237311 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 23 00:58:36.237759 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 23 00:58:36.239681 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 23 00:58:36.246439 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 00:58:36.246670 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 00:58:36.246975 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 15625 usecs
Jan 23 00:58:36.247425 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 00:58:36.247583 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 23 00:58:36.247753 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 23 00:58:36.248196 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 00:58:36.248444 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 23 00:58:36.248463 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 00:58:36.248472 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 00:58:36.248479 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 00:58:36.248491 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 00:58:36.248504 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 00:58:36.248517 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 00:58:36.248527 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 00:58:36.248537 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 00:58:36.248552 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 00:58:36.248564 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 00:58:36.248577 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 00:58:36.248589 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 00:58:36.248599 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 00:58:36.248609 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 00:58:36.248622 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 00:58:36.248634 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 00:58:36.248646 kernel: iommu: Default domain type: Translated
Jan 23 00:58:36.248664 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 00:58:36.248676 kernel: PCI: Using ACPI for IRQ routing
Jan 23 00:58:36.248686 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 00:58:36.248696 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 23 00:58:36.248709 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 23 00:58:36.249032 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 00:58:36.249224 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 00:58:36.249538 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 00:58:36.249556 kernel: vgaarb: loaded
Jan 23 00:58:36.249575 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 23 00:58:36.249587 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 23 00:58:36.249597 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 00:58:36.249607 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 00:58:36.249621 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 00:58:36.249631 kernel: pnp: PnP ACPI init
Jan 23 00:58:36.250231 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 23 00:58:36.250253 kernel: pnp: PnP ACPI: found 6 devices
Jan 23 00:58:36.250274 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 00:58:36.250284 kernel: NET: Registered PF_INET protocol family
Jan 23 00:58:36.250295 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 00:58:36.250305 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 00:58:36.250317 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 00:58:36.250414 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 00:58:36.250431 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 00:58:36.250444 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 00:58:36.250457 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 00:58:36.250472 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 00:58:36.250483 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 00:58:36.250494 kernel: NET: Registered PF_XDP protocol family
Jan 23 00:58:36.250659 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 00:58:36.250932 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 00:58:36.251087 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 00:58:36.251236 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 23 00:58:36.251449 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 23 00:58:36.251583 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 23 00:58:36.251598 kernel: PCI: CLS 0 bytes, default 64
Jan 23 00:58:36.251606 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 23 00:58:36.251614 kernel: Initialise system trusted keyrings
Jan 23 00:58:36.251621 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 00:58:36.251628 kernel: Key type asymmetric registered
Jan 23 00:58:36.251635 kernel: Asymmetric key parser 'x509' registered
Jan 23 00:58:36.251642 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 00:58:36.251650 kernel: io scheduler mq-deadline registered
Jan 23 00:58:36.251657 kernel: io scheduler kyber registered
Jan 23 00:58:36.251667 kernel: io scheduler bfq registered
Jan 23 00:58:36.251674 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 00:58:36.251682 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 23 00:58:36.251689 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 23 00:58:36.251697 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 23 00:58:36.251705 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 00:58:36.251718 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 00:58:36.251733 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 00:58:36.251743 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 00:58:36.251758 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 00:58:36.252254 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 23 00:58:36.252271 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 00:58:36.252495 kernel: rtc_cmos 00:04: registered as rtc0
Jan 23 00:58:36.252634 kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T00:58:34 UTC (1769129914)
Jan 23 00:58:36.252951 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 23 00:58:36.252969 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 00:58:36.252988 kernel: NET: Registered PF_INET6 protocol family
Jan 23 00:58:36.253002 kernel: Segment Routing with IPv6
Jan 23 00:58:36.253013 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 00:58:36.253022 kernel: NET: Registered PF_PACKET protocol family
Jan 23 00:58:36.253033 kernel: Key type dns_resolver registered
Jan 23 00:58:36.253044 kernel: IPI shorthand broadcast: enabled
Jan 23 00:58:36.253058 kernel: sched_clock: Marking stable (9060066257, 974095601)->(10972177339, -938015481)
Jan 23 00:58:36.253070 kernel: registered taskstats version 1
Jan 23 00:58:36.253080 kernel: Loading compiled-in X.509 certificates
Jan 23 00:58:36.253090 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a'
Jan 23 00:58:36.253108 kernel: Demotion targets for Node 0: null
Jan 23 00:58:36.253119 kernel: Key type .fscrypt registered
Jan 23 00:58:36.253129 kernel: Key type fscrypt-provisioning registered
Jan 23 00:58:36.253139 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 00:58:36.253151 kernel: ima: Allocated hash algorithm: sha1
Jan 23 00:58:36.253163 kernel: ima: No architecture policies found
Jan 23 00:58:36.253176 kernel: clk: Disabling unused clocks
Jan 23 00:58:36.253188 kernel: Warning: unable to open an initial console.
Jan 23 00:58:36.253203 kernel: Freeing unused kernel image (initmem) memory: 46196K
Jan 23 00:58:36.253214 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 00:58:36.253226 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 00:58:36.253237 kernel: Run /init as init process
Jan 23 00:58:36.253244 kernel: with arguments:
Jan 23 00:58:36.253252 kernel: /init
Jan 23 00:58:36.253259 kernel: with environment:
Jan 23 00:58:36.253266 kernel: HOME=/
Jan 23 00:58:36.253273 kernel: TERM=linux
Jan 23 00:58:36.253285 systemd[1]: Successfully made /usr/ read-only.
Jan 23 00:58:36.253295 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 00:58:36.253304 systemd[1]: Detected virtualization kvm.
Jan 23 00:58:36.253311 systemd[1]: Detected architecture x86-64.
Jan 23 00:58:36.253319 systemd[1]: Running in initrd.
Jan 23 00:58:36.253326 systemd[1]: No hostname configured, using default hostname.
Jan 23 00:58:36.253412 systemd[1]: Hostname set to .
Jan 23 00:58:36.253424 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 00:58:36.253444 systemd[1]: Queued start job for default target initrd.target.
Jan 23 00:58:36.253454 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 00:58:36.253462 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 00:58:36.253470 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 00:58:36.253478 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 00:58:36.253489 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 00:58:36.253498 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 00:58:36.253506 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 00:58:36.253514 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 00:58:36.253522 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 00:58:36.253530 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 00:58:36.253538 systemd[1]: Reached target paths.target - Path Units.
Jan 23 00:58:36.253548 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 00:58:36.253556 systemd[1]: Reached target swap.target - Swaps.
Jan 23 00:58:36.253564 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 00:58:36.253571 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 00:58:36.253579 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 00:58:36.253587 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 00:58:36.253595 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 00:58:36.253602 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 00:58:36.253610 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 00:58:36.253620 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 00:58:36.253628 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 00:58:36.253636 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 00:58:36.253644 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 00:58:36.253651 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 00:58:36.253660 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 00:58:36.253667 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 00:58:36.253675 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 00:58:36.253685 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 00:58:36.253693 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:58:36.253703 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 00:58:36.253748 systemd-journald[203]: Collecting audit messages is disabled.
Jan 23 00:58:36.253885 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 00:58:36.253899 systemd-journald[203]: Journal started
Jan 23 00:58:36.253916 systemd-journald[203]: Runtime Journal (/run/log/journal/b08b7e9028654be393a847c8b75bddeb) is 6M, max 48.3M, 42.2M free.
Jan 23 00:58:36.263582 systemd-modules-load[204]: Inserted module 'overlay'
Jan 23 00:58:36.289148 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 00:58:36.265032 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 00:58:36.287535 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 00:58:36.333726 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 00:58:36.355203 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 00:58:36.364108 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 00:58:36.430684 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 00:58:36.450317 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 00:58:36.453948 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 00:58:36.465727 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 00:58:36.504296 systemd-modules-load[204]: Inserted module 'br_netfilter'
Jan 23 00:58:36.505089 kernel: Bridge firewalling registered
Jan 23 00:58:36.507190 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 00:58:36.517184 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 00:58:36.585552 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:58:37.155740 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:58:37.176446 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 00:58:37.196265 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 00:58:37.273646 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 00:58:37.276503 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 00:58:37.320579 systemd-resolved[235]: Positive Trust Anchors:
Jan 23 00:58:37.320667 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 00:58:37.320706 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 00:58:37.412923 dracut-cmdline[247]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 00:58:37.335465 systemd-resolved[235]: Defaulting to hostname 'linux'.
Jan 23 00:58:37.349582 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 00:58:37.422205 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 00:58:37.787211 kernel: SCSI subsystem initialized
Jan 23 00:58:37.814138 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 00:58:37.869328 kernel: iscsi: registered transport (tcp)
Jan 23 00:58:37.916697 kernel: iscsi: registered transport (qla4xxx)
Jan 23 00:58:37.917175 kernel: QLogic iSCSI HBA Driver
Jan 23 00:58:38.000463 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 00:58:38.077095 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 00:58:38.109561 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 00:58:38.441599 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 00:58:38.480639 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 00:58:38.738186 kernel: raid6: avx2x4 gen() 23267 MB/s
Jan 23 00:58:38.758416 kernel: raid6: avx2x2 gen() 6786 MB/s
Jan 23 00:58:38.785651 kernel: raid6: avx2x1 gen() 13333 MB/s
Jan 23 00:58:38.786232 kernel: raid6: using algorithm avx2x4 gen() 23267 MB/s
Jan 23 00:58:38.815101 kernel: raid6: .... xor() 3938 MB/s, rmw enabled
Jan 23 00:58:38.815641 kernel: raid6: using avx2x2 recovery algorithm
Jan 23 00:58:38.892991 kernel: xor: automatically using best checksumming function avx
Jan 23 00:58:39.873504 kernel: hrtimer: interrupt took 4687323 ns
Jan 23 00:58:41.233234 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 00:58:41.412914 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 00:58:41.523234 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 00:58:42.506558 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Jan 23 00:58:42.530983 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 00:58:42.579591 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 00:58:42.768754 dracut-pre-trigger[457]: rd.md=0: removing MD RAID activation
Jan 23 00:58:42.979725 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 00:58:42.998119 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 00:58:43.274620 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 00:58:43.293031 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 00:58:43.583174 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 23 00:58:43.605638 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 00:58:43.620523 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 00:58:43.650677 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 23 00:58:43.620712 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:58:43.699059 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 00:58:43.699090 kernel: GPT:9289727 != 19775487
Jan 23 00:58:43.699105 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 00:58:43.699119 kernel: GPT:9289727 != 19775487
Jan 23 00:58:43.699135 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 00:58:43.699159 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 00:58:43.651540 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:58:43.655634 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:58:43.710144 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 00:58:43.791072 kernel: libata version 3.00 loaded.
Jan 23 00:58:43.828156 kernel: ahci 0000:00:1f.2: version 3.0
Jan 23 00:58:43.838053 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 23 00:58:43.861185 kernel: AES CTR mode by8 optimization enabled
Jan 23 00:58:43.861236 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 23 00:58:43.870165 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 23 00:58:43.878952 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 23 00:58:43.901459 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 23 00:58:43.981242 kernel: scsi host0: ahci
Jan 23 00:58:43.984221 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 23 00:58:44.822445 kernel: scsi host1: ahci
Jan 23 00:58:44.822948 kernel: scsi host2: ahci
Jan 23 00:58:44.823191 kernel: scsi host3: ahci
Jan 23 00:58:44.823510 kernel: scsi host4: ahci
Jan 23 00:58:44.824143 kernel: scsi host5: ahci
Jan 23 00:58:44.824453 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Jan 23 00:58:44.824471 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Jan 23 00:58:44.824487 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Jan 23 00:58:44.824501 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Jan 23 00:58:44.824522 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Jan 23 00:58:44.824535 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Jan 23 00:58:44.824549 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 23 00:58:44.824565 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 23 00:58:44.824581 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 23 00:58:44.824597 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 23 00:58:44.824612 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 23 00:58:44.824627 kernel: ata3.00: LPM support broken, forcing max_power
Jan 23 00:58:44.824728 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 23 00:58:44.824752 kernel: ata3.00: applying bridge limits
Jan 23 00:58:44.824923 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 23 00:58:44.824943 kernel: ata3.00: LPM support broken, forcing max_power
Jan 23 00:58:44.824959 kernel: ata3.00: configured for UDMA/100
Jan 23 00:58:44.824974 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 23 00:58:44.825236 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 23 00:58:44.825706 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 00:58:44.825723 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 23 00:58:44.860617 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:58:44.911936 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 23 00:58:44.970218 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 23 00:58:44.985079 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 23 00:58:45.017721 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 00:58:45.068157 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 00:58:45.173641 disk-uuid[624]: Primary Header is updated.
Jan 23 00:58:45.173641 disk-uuid[624]: Secondary Entries is updated.
Jan 23 00:58:45.173641 disk-uuid[624]: Secondary Header is updated.
Jan 23 00:58:45.207212 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 00:58:45.515607 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 00:58:45.516925 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 00:58:45.570145 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 00:58:45.589634 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 00:58:45.591994 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 00:58:45.692603 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 00:58:46.267956 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 00:58:46.272497 disk-uuid[625]: The operation has completed successfully.
Jan 23 00:58:46.384528 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 00:58:46.385059 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 00:58:46.494461 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 00:58:46.555266 sh[649]: Success
Jan 23 00:58:46.657239 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 00:58:46.657324 kernel: device-mapper: uevent: version 1.0.3
Jan 23 00:58:46.682052 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 00:58:46.778037 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 23 00:58:46.942320 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 00:58:46.945991 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 00:58:47.013088 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 00:58:47.071990 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (661)
Jan 23 00:58:47.097471 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4
Jan 23 00:58:47.101532 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 00:58:47.196213 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 00:58:47.196755 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 00:58:47.200925 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 00:58:47.202226 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 00:58:47.248154 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 00:58:47.252000 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 00:58:47.318327 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 00:58:47.402287 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (684)
Jan 23 00:58:47.415149 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 00:58:47.415540 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 00:58:47.476513 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 00:58:47.476964 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 00:58:47.504208 kernel: BTRFS info (device vda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 00:58:47.522292 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 00:58:47.539714 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 00:58:47.959644 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 00:58:48.078571 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 00:58:48.294758 systemd-networkd[830]: lo: Link UP
Jan 23 00:58:48.295046 systemd-networkd[830]: lo: Gained carrier
Jan 23 00:58:48.305092 systemd-networkd[830]: Enumeration completed
Jan 23 00:58:48.308329 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 00:58:48.319355 systemd-networkd[830]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:58:48.319457 systemd-networkd[830]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 00:58:48.326991 systemd[1]: Reached target network.target - Network.
Jan 23 00:58:48.354311 systemd-networkd[830]: eth0: Link UP
Jan 23 00:58:48.354984 systemd-networkd[830]: eth0: Gained carrier
Jan 23 00:58:48.355005 systemd-networkd[830]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:58:48.471491 systemd-networkd[830]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 23 00:58:48.602223 ignition[741]: Ignition 2.22.0
Jan 23 00:58:48.602323 ignition[741]: Stage: fetch-offline
Jan 23 00:58:48.602568 ignition[741]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:58:48.602584 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 00:58:48.603006 ignition[741]: parsed url from cmdline: ""
Jan 23 00:58:48.603012 ignition[741]: no config URL provided
Jan 23 00:58:48.603022 ignition[741]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 00:58:48.603036 ignition[741]: no config at "/usr/lib/ignition/user.ign"
Jan 23 00:58:48.603262 ignition[741]: op(1): [started] loading QEMU firmware config module
Jan 23 00:58:48.603270 ignition[741]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 23 00:58:48.675287 ignition[741]: op(1): [finished] loading QEMU firmware config module
Jan 23 00:58:49.445548 systemd-networkd[830]: eth0: Gained IPv6LL
Jan 23 00:58:49.780291 ignition[741]: parsing config with SHA512: 413cbeee892a0c7b764da4a29eb6b0b7399aa4f50bfe88a0e1563354c07da371104c6492b32b5705ae321a865098097fbfed342e9a0460d3f7102b17dc113329
Jan 23 00:58:49.833754 unknown[741]: fetched base config from "system"
Jan 23 00:58:49.834143 unknown[741]: fetched user config from "qemu"
Jan 23 00:58:49.837148 ignition[741]: fetch-offline: fetch-offline passed
Jan 23 00:58:49.842279 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 00:58:49.838221 ignition[741]: Ignition finished successfully
Jan 23 00:58:49.859226 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 23 00:58:49.861125 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 00:58:50.063151 ignition[844]: Ignition 2.22.0
Jan 23 00:58:50.063252 ignition[844]: Stage: kargs
Jan 23 00:58:50.063542 ignition[844]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:58:50.063560 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 00:58:50.097273 ignition[844]: kargs: kargs passed
Jan 23 00:58:50.097552 ignition[844]: Ignition finished successfully
Jan 23 00:58:50.114637 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 00:58:50.130176 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 00:58:50.217186 ignition[852]: Ignition 2.22.0
Jan 23 00:58:50.217295 ignition[852]: Stage: disks
Jan 23 00:58:50.217597 ignition[852]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:58:50.229242 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 00:58:50.217614 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 00:58:50.223094 ignition[852]: disks: disks passed
Jan 23 00:58:50.223204 ignition[852]: Ignition finished successfully
Jan 23 00:58:50.286636 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 00:58:50.310259 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 00:58:50.310714 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 00:58:50.334736 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 00:58:50.364569 systemd[1]: Reached target basic.target - Basic System.
Jan 23 00:58:50.402353 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 00:58:50.478924 systemd-fsck[863]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 23 00:58:50.496574 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 00:58:50.513038 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 00:58:50.997090 kernel: EXT4-fs (vda9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none.
Jan 23 00:58:50.999538 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 00:58:51.009015 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 00:58:51.019684 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 00:58:51.077719 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 00:58:51.108627 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (871)
Jan 23 00:58:51.089245 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 00:58:51.160121 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 00:58:51.160160 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 00:58:51.089339 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 00:58:51.089650 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 00:58:51.148673 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 00:58:51.162725 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 00:58:51.284349 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 00:58:51.285303 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 00:58:51.290344 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 00:58:51.371511 initrd-setup-root[895]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 00:58:51.405481 initrd-setup-root[902]: cut: /sysroot/etc/group: No such file or directory
Jan 23 00:58:51.436012 initrd-setup-root[909]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 00:58:51.490257 initrd-setup-root[916]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 00:58:51.872601 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 00:58:51.876185 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 00:58:51.924074 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 00:58:51.959498 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 00:58:51.977094 kernel: BTRFS info (device vda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 00:58:52.048348 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 00:58:52.133637 ignition[984]: INFO : Ignition 2.22.0
Jan 23 00:58:52.133637 ignition[984]: INFO : Stage: mount
Jan 23 00:58:52.163327 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 00:58:52.163327 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 00:58:52.163327 ignition[984]: INFO : mount: mount passed
Jan 23 00:58:52.163327 ignition[984]: INFO : Ignition finished successfully
Jan 23 00:58:52.142014 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 00:58:52.165639 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 00:58:52.251251 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 00:58:52.339565 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (997)
Jan 23 00:58:52.360195 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 00:58:52.360248 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 00:58:52.399020 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 00:58:52.399109 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 00:58:52.405705 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 00:58:52.528999 ignition[1014]: INFO : Ignition 2.22.0
Jan 23 00:58:52.528999 ignition[1014]: INFO : Stage: files
Jan 23 00:58:52.545577 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 00:58:52.545577 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 00:58:52.569289 ignition[1014]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 00:58:52.581137 ignition[1014]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 00:58:52.581137 ignition[1014]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 00:58:52.623747 ignition[1014]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 00:58:52.640605 ignition[1014]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 00:58:52.657124 unknown[1014]: wrote ssh authorized keys file for user: core
Jan 23 00:58:52.673502 ignition[1014]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 00:58:52.698255 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 00:58:52.698255 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 23 00:58:52.833620 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 00:58:52.979194 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 00:58:52.979194 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 00:58:53.015744 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 00:58:53.036667 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 00:58:53.054747 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 00:58:53.054747 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 00:58:53.097225 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 00:58:53.097225 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 00:58:53.097225 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 00:58:53.097225 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 00:58:53.097225 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 00:58:53.097225 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 00:58:53.097225 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 00:58:53.097225 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 00:58:53.097225 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 23 00:58:53.352747 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 23 00:58:54.320038 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 00:58:54.320038 ignition[1014]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 23 00:58:54.371487 ignition[1014]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 00:58:54.371487 ignition[1014]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 00:58:54.371487 ignition[1014]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 23 00:58:54.371487 ignition[1014]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 23 00:58:54.371487 ignition[1014]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 23 00:58:54.371487 ignition[1014]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 23 00:58:54.371487 ignition[1014]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 23 00:58:54.371487 ignition[1014]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 23 00:58:54.555736 ignition[1014]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 23 00:58:54.576121 ignition[1014]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 23 00:58:54.596126 ignition[1014]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 23 00:58:54.596126 ignition[1014]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 00:58:54.633343 ignition[1014]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 00:58:54.633343 ignition[1014]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 00:58:54.633343 ignition[1014]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 00:58:54.633343 ignition[1014]: INFO : files: files passed
Jan 23 00:58:54.633343 ignition[1014]: INFO : Ignition finished successfully
Jan 23 00:58:54.719601 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 00:58:54.771193 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 00:58:54.777928 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 00:58:54.861167 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 00:58:54.873748 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 00:58:54.904494 initrd-setup-root-after-ignition[1043]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 23 00:58:54.936166 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 00:58:54.973085 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 00:58:54.973085 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 00:58:55.002029 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 00:58:55.056212 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 00:58:55.088113 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 00:58:55.277039 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 00:58:55.277306 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 00:58:55.298272 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 00:58:55.307171 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 00:58:55.328284 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 00:58:55.330314 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 00:58:55.488345 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 00:58:55.521019 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 00:58:55.617205 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 00:58:55.629351 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 00:58:55.644128 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 00:58:55.644588 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 00:58:55.645263 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 00:58:55.689538 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 00:58:55.744242 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 00:58:55.754600 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 00:58:55.767529 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 00:58:55.813702 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 00:58:55.826579 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 00:58:55.864660 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 00:58:55.867326 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 00:58:55.909366 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 00:58:55.924275 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 00:58:55.952185 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 00:58:55.993500 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 00:58:55.994719 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 00:58:56.052036 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 00:58:56.072595 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 00:58:56.094363 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 00:58:56.096729 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 00:58:56.125547 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 00:58:56.126198 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 00:58:56.154083 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 00:58:56.154530 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 00:58:56.180147 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 00:58:56.205611 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 00:58:56.210672 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 00:58:56.257723 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 00:58:56.301658 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 00:58:56.309677 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 00:58:56.310016 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 00:58:56.373158 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 00:58:56.373523 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 00:58:56.407622 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 00:58:56.408147 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 00:58:56.427575 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 00:58:56.428073 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 00:58:56.484374 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 00:58:56.517695 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 00:58:56.573054 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 00:58:56.573589 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 00:58:56.670640 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 00:58:56.671178 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 00:58:56.725756 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 00:58:56.726354 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 00:58:56.871049 ignition[1069]: INFO : Ignition 2.22.0
Jan 23 00:58:56.871049 ignition[1069]: INFO : Stage: umount
Jan 23 00:58:56.900226 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 00:58:56.900226 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 00:58:56.900226 ignition[1069]: INFO : umount: umount passed
Jan 23 00:58:56.900226 ignition[1069]: INFO : Ignition finished successfully
Jan 23 00:58:56.887757 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 00:58:56.888582 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 00:58:56.915141 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 00:58:56.916050 systemd[1]: Stopped target network.target - Network.
Jan 23 00:58:56.966542 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 00:58:56.967042 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 00:58:56.999534 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 00:58:56.999653 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 00:58:57.043740 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 00:58:57.067316 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 00:58:57.075198 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 00:58:57.075280 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 00:58:57.089284 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 00:58:57.096170 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 00:58:57.096746 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 00:58:57.097182 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 00:58:57.098182 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 00:58:57.098292 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 00:58:57.189617 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 00:58:57.190343 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 00:58:57.285752 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 00:58:57.289129 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 00:58:57.289527 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 00:58:57.499489 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 00:58:57.507270 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 00:58:57.580615 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 00:58:57.583704 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 00:58:57.682376 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 00:58:57.906245 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 00:58:57.906623 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 00:58:58.017358 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 00:58:58.048698 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:58:58.115256 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 00:58:58.115609 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 00:58:58.130048 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 00:58:58.130249 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 00:58:58.226195 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 00:58:58.245147 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 00:58:58.245320 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 00:58:58.382088 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 00:58:58.383096 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 00:58:58.431227 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 00:58:58.431543 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 00:58:58.470705 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 00:58:58.474311 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 00:58:58.482981 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 00:58:58.483044 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 00:58:58.515566 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 00:58:58.515668 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 00:58:58.546647 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 00:58:58.547217 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 00:58:58.587335 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 00:58:58.587558 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 00:58:58.634692 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 00:58:58.671535 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 00:58:58.674724 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 00:58:58.718378 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 00:58:58.718605 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 00:58:58.808686 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 23 00:58:58.809198 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 00:58:58.847106 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 00:58:58.847305 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 00:58:58.879386 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 00:58:58.879719 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:58:58.939346 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 23 00:58:58.939568 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jan 23 00:58:58.939647 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 00:58:58.939736 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 00:58:58.944195 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 00:58:58.945269 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 00:58:58.988324 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 00:58:59.020758 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 00:58:59.194232 systemd[1]: Switching root.
Jan 23 00:58:59.294236 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Jan 23 00:58:59.294340 systemd-journald[203]: Journal stopped
Jan 23 00:59:08.585660 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 00:59:08.585753 kernel: SELinux: policy capability open_perms=1
Jan 23 00:59:08.585965 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 00:59:08.585987 kernel: SELinux: policy capability always_check_network=0
Jan 23 00:59:08.586012 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 00:59:08.586030 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 00:59:08.586045 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 00:59:08.586067 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 00:59:08.586086 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 00:59:08.586102 kernel: audit: type=1403 audit(1769129939.780:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 00:59:08.586126 systemd[1]: Successfully loaded SELinux policy in 184.361ms.
Jan 23 00:59:08.586151 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 25.966ms.
Jan 23 00:59:08.586180 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 00:59:08.586203 systemd[1]: Detected virtualization kvm.
Jan 23 00:59:08.586219 systemd[1]: Detected architecture x86-64.
Jan 23 00:59:08.586235 systemd[1]: Detected first boot.
Jan 23 00:59:08.586250 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 00:59:08.586265 zram_generator::config[1115]: No configuration found.
Jan 23 00:59:08.586293 kernel: Guest personality initialized and is inactive
Jan 23 00:59:08.586402 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 23 00:59:08.586518 kernel: Initialized host personality
Jan 23 00:59:08.586541 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 00:59:08.586557 systemd[1]: Populated /etc with preset unit settings.
Jan 23 00:59:08.586574 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 00:59:08.586593 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 00:59:08.586612 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 00:59:08.586633 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 00:59:08.586650 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 00:59:08.586666 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 00:59:08.586975 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 00:59:08.586998 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 00:59:08.587119 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 00:59:08.587141 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 00:59:08.587157 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 00:59:08.587173 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 00:59:08.587188 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 00:59:08.587206 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 00:59:08.587225 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 00:59:08.587249 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 00:59:08.587266 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 00:59:08.587282 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 00:59:08.587298 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 00:59:08.587317 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 00:59:08.587336 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 00:59:08.587354 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 00:59:08.587369 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 00:59:08.587389 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 00:59:08.587534 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 00:59:08.587559 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 00:59:08.587576 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 00:59:08.587592 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 00:59:08.587607 systemd[1]: Reached target swap.target - Swaps.
Jan 23 00:59:08.587625 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 00:59:08.587644 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 00:59:08.587662 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 00:59:08.587683 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 00:59:08.587699 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 00:59:08.587715 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 00:59:08.587733 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 00:59:08.587752 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 00:59:08.587957 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 00:59:08.587980 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 00:59:08.587998 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 00:59:08.588014 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 00:59:08.588036 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 00:59:08.588055 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 00:59:08.588075 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 00:59:08.588093 systemd[1]: Reached target machines.target - Containers.
Jan 23 00:59:08.588111 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 00:59:08.588127 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:59:08.588142 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 00:59:08.588162 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 00:59:08.588186 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 00:59:08.588202 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 00:59:08.588218 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 00:59:08.588234 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 00:59:08.588250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 00:59:08.588269 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 00:59:08.588289 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 00:59:08.588305 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 00:59:08.588321 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 00:59:08.588341 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 00:59:08.588361 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:59:08.588381 kernel: fuse: init (API version 7.41)
Jan 23 00:59:08.588399 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 00:59:08.588414 kernel: loop: module loaded
Jan 23 00:59:08.588545 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 00:59:08.588564 kernel: ACPI: bus type drm_connector registered
Jan 23 00:59:08.588583 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 00:59:08.588606 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 00:59:08.588628 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 00:59:08.588644 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 00:59:08.588698 systemd-journald[1200]: Collecting audit messages is disabled.
Jan 23 00:59:08.588735 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 00:59:08.588752 systemd-journald[1200]: Journal started
Jan 23 00:59:08.588985 systemd-journald[1200]: Runtime Journal (/run/log/journal/b08b7e9028654be393a847c8b75bddeb) is 6M, max 48.3M, 42.2M free.
Jan 23 00:59:05.825071 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 00:59:05.911514 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 23 00:59:05.913546 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 00:59:05.914946 systemd[1]: systemd-journald.service: Consumed 3.192s CPU time.
Jan 23 00:59:08.614324 systemd[1]: Stopped verity-setup.service.
Jan 23 00:59:08.614411 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 00:59:08.672082 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 00:59:08.688614 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 00:59:08.702544 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 00:59:08.715303 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 00:59:08.727188 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 00:59:08.751228 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 00:59:08.775328 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 00:59:08.803954 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 00:59:08.822645 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 00:59:08.869094 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 00:59:08.872320 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 00:59:08.889416 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 00:59:08.897259 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 00:59:08.917387 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 00:59:08.918208 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 00:59:08.940101 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 00:59:08.941938 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 00:59:08.960646 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 00:59:08.961403 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 00:59:09.020332 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 00:59:09.023265 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 00:59:09.053344 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 00:59:09.071344 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 00:59:09.090709 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 00:59:09.123170 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 00:59:09.162649 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 00:59:09.224979 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 00:59:09.274379 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 00:59:09.315585 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 00:59:09.369950 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 00:59:09.370205 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 00:59:09.395062 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 00:59:09.434124 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 00:59:09.476205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:59:09.484074 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 00:59:09.517126 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 00:59:09.532200 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 00:59:09.567114 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 00:59:09.587703 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 00:59:09.615211 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 00:59:09.632652 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 00:59:09.669295 systemd-journald[1200]: Time spent on flushing to /var/log/journal/b08b7e9028654be393a847c8b75bddeb is 97.820ms for 980 entries.
Jan 23 00:59:09.669295 systemd-journald[1200]: System Journal (/var/log/journal/b08b7e9028654be393a847c8b75bddeb) is 8M, max 195.6M, 187.6M free.
Jan 23 00:59:09.813055 systemd-journald[1200]: Received client request to flush runtime journal.
Jan 23 00:59:09.670331 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 00:59:09.722603 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 00:59:09.745276 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 00:59:09.797632 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 00:59:09.848285 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 00:59:09.885112 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 00:59:09.896325 kernel: loop0: detected capacity change from 0 to 128560
Jan 23 00:59:09.994190 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 00:59:10.264687 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 00:59:10.491519 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:59:10.799084 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 00:59:10.816555 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 00:59:10.826012 kernel: loop1: detected capacity change from 0 to 110984
Jan 23 00:59:10.826616 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Jan 23 00:59:10.826738 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Jan 23 00:59:10.862178 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 00:59:10.894210 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 00:59:10.997006 kernel: loop2: detected capacity change from 0 to 229808
Jan 23 00:59:11.185221 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 00:59:11.254196 kernel: loop3: detected capacity change from 0 to 128560
Jan 23 00:59:11.252615 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 00:59:11.394687 kernel: loop4: detected capacity change from 0 to 110984
Jan 23 00:59:11.592627 kernel: loop5: detected capacity change from 0 to 229808
Jan 23 00:59:12.067284 (sd-merge)[1257]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 23 00:59:12.073041 (sd-merge)[1257]: Merged extensions into '/usr'.
Jan 23 00:59:12.121396 systemd[1]: Reload requested from client PID 1235 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 00:59:12.121629 systemd[1]: Reloading...
Jan 23 00:59:12.266201 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Jan 23 00:59:12.266232 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Jan 23 00:59:12.527937 zram_generator::config[1284]: No configuration found.
Jan 23 00:59:15.220701 systemd[1]: Reloading finished in 3093 ms.
Jan 23 00:59:15.315544 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 00:59:15.367753 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 00:59:15.429273 ldconfig[1230]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 00:59:15.474378 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 00:59:15.650334 systemd[1]: Starting ensure-sysext.service...
Jan 23 00:59:15.715555 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 00:59:15.916179 systemd[1]: Reload requested from client PID 1327 ('systemctl') (unit ensure-sysext.service)...
Jan 23 00:59:15.916294 systemd[1]: Reloading...
Jan 23 00:59:15.967988 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 00:59:15.969344 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 00:59:15.971264 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 00:59:15.972043 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 00:59:15.974227 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 00:59:15.975022 systemd-tmpfiles[1328]: ACLs are not supported, ignoring.
Jan 23 00:59:15.975128 systemd-tmpfiles[1328]: ACLs are not supported, ignoring.
Jan 23 00:59:15.989351 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 00:59:15.989368 systemd-tmpfiles[1328]: Skipping /boot
Jan 23 00:59:16.131941 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 00:59:16.131970 systemd-tmpfiles[1328]: Skipping /boot
Jan 23 00:59:16.286195 zram_generator::config[1355]: No configuration found.
Jan 23 00:59:17.462172 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1330903417 wd_nsec: 1330902709
Jan 23 00:59:18.228326 systemd[1]: Reloading finished in 2310 ms.
Jan 23 00:59:18.302669 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 00:59:18.614005 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 00:59:18.809048 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 00:59:18.825655 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 00:59:18.859750 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 00:59:18.890681 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 00:59:18.907197 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 00:59:18.952160 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 00:59:18.982646 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 00:59:19.254597 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 00:59:19.285577 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 00:59:19.318554 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 00:59:19.319365 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:59:19.323105 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 00:59:19.342681 augenrules[1423]: No rules
Jan 23 00:59:19.347254 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 00:59:19.368420 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 00:59:19.386182 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:59:19.388112 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:59:19.398025 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 00:59:19.410759 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 00:59:19.417619 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 00:59:19.418996 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 00:59:19.437231 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 00:59:19.443951 systemd-udevd[1405]: Using default interface naming scheme 'v255'.
Jan 23 00:59:19.456665 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 00:59:19.471969 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 00:59:19.473037 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 00:59:19.497740 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 00:59:19.498566 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 00:59:19.516341 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 00:59:19.517255 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 00:59:19.543154 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 00:59:19.598398 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 00:59:19.604296 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 00:59:19.620644 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:59:19.625660 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 00:59:19.662332 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 00:59:19.688644 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 00:59:19.723988 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 00:59:19.736380 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:59:19.742563 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:59:19.743162 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 00:59:19.743410 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 00:59:19.750167 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 00:59:19.768328 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 00:59:19.770029 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 00:59:19.785030 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 00:59:19.785391 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 00:59:19.867243 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 00:59:19.882592 systemd[1]: Finished ensure-sysext.service.
Jan 23 00:59:19.898317 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 00:59:19.902116 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 00:59:19.920623 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 00:59:19.921709 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 00:59:19.949996 systemd-resolved[1397]: Positive Trust Anchors:
Jan 23 00:59:19.950023 systemd-resolved[1397]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 00:59:19.950064 systemd-resolved[1397]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 00:59:19.953088 augenrules[1442]: /sbin/augenrules: No change
Jan 23 00:59:19.956674 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 00:59:19.957232 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 00:59:19.970140 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 23 00:59:19.991318 systemd-resolved[1397]: Defaulting to hostname 'linux'.
Jan 23 00:59:20.010388 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 00:59:20.031422 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 00:59:20.100098 augenrules[1498]: No rules
Jan 23 00:59:20.102726 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 00:59:20.103329 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 00:59:20.218996 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 00:59:20.501574 systemd-networkd[1485]: lo: Link UP
Jan 23 00:59:20.501588 systemd-networkd[1485]: lo: Gained carrier
Jan 23 00:59:20.505567 systemd-networkd[1485]: Enumeration completed
Jan 23 00:59:20.513195 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 00:59:20.529408 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 23 00:59:20.562379 systemd[1]: Reached target network.target - Network.
Jan 23 00:59:20.575213 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 00:59:20.582366 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:59:20.582570 systemd-networkd[1485]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 00:59:20.587198 systemd-networkd[1485]: eth0: Link UP
Jan 23 00:59:20.588662 systemd-networkd[1485]: eth0: Gained carrier
Jan 23 00:59:20.588693 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:59:20.589120 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 00:59:20.607392 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 00:59:20.627664 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 23 00:59:20.653583 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 00:59:20.672544 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 00:59:20.672608 systemd[1]: Reached target paths.target - Path Units.
Jan 23 00:59:20.684166 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 00:59:20.703359 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 00:59:20.727651 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 00:59:20.748033 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 00:59:20.749051 systemd-networkd[1485]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 23 00:59:20.765589 systemd-timesyncd[1491]: Network configuration changed, trying to establish connection.
Jan 23 00:59:20.849433 systemd-timesyncd[1491]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 23 00:59:20.937357 systemd-timesyncd[1491]: Initial clock synchronization to Fri 2026-01-23 00:59:21.175815 UTC.
Jan 23 00:59:20.980257 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 00:59:21.020375 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 00:59:21.042730 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 00:59:21.066413 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 00:59:21.082629 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 00:59:21.099213 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 00:59:21.160236 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 23 00:59:21.184579 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 23 00:59:21.160393 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 00:59:21.188281 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 00:59:21.217432 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 23 00:59:21.223430 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 00:59:21.258526 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 00:59:21.275647 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 00:59:21.344486 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 00:59:21.385793 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 00:59:21.402471 systemd[1]: Reached target basic.target - Basic System.
Jan 23 00:59:21.423629 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 00:59:21.423772 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 00:59:21.428138 kernel: ACPI: button: Power Button [PWRF]
Jan 23 00:59:21.441547 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 00:59:21.447712 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 00:59:21.488237 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 00:59:21.515163 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 00:59:21.546289 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 00:59:21.563080 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 00:59:21.571404 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 23 00:59:21.603018 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 00:59:21.624707 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 00:59:21.686533 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 00:59:21.711306 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 00:59:21.745652 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Refreshing passwd entry cache
Jan 23 00:59:21.750996 oslogin_cache_refresh[1538]: Refreshing passwd entry cache
Jan 23 00:59:21.775690 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 00:59:21.843281 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 00:59:21.884180 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Failure getting users, quitting
Jan 23 00:59:21.884180 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 00:59:21.884180 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Refreshing group entry cache
Jan 23 00:59:21.880096 oslogin_cache_refresh[1538]: Failure getting users, quitting
Jan 23 00:59:21.880140 oslogin_cache_refresh[1538]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 00:59:21.880458 oslogin_cache_refresh[1538]: Refreshing group entry cache
Jan 23 00:59:21.885395 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 00:59:21.891055 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 00:59:21.898624 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 00:59:21.931392 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Failure getting groups, quitting
Jan 23 00:59:21.931500 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 00:59:21.931405 oslogin_cache_refresh[1538]: Failure getting groups, quitting
Jan 23 00:59:21.931497 oslogin_cache_refresh[1538]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 00:59:21.945628 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 00:59:22.030458 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 00:59:22.135040 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 00:59:22.142706 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 23 00:59:22.147323 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 23 00:59:22.419387 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 23 00:59:22.549130 systemd-networkd[1485]: eth0: Gained IPv6LL
Jan 23 00:59:22.577445 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 23 00:59:22.586013 jq[1536]: false
Jan 23 00:59:22.599351 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 00:59:22.608755 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 23 00:59:22.620156 jq[1549]: true
Jan 23 00:59:22.641747 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 00:59:22.656148 extend-filesystems[1537]: Found /dev/vda6
Jan 23 00:59:22.658525 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 00:59:22.776408 (ntainerd)[1567]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 23 00:59:23.092456 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 23 00:59:23.106294 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 00:59:23.115957 extend-filesystems[1537]: Found /dev/vda9
Jan 23 00:59:23.166319 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 23 00:59:23.177692 jq[1564]: true
Jan 23 00:59:23.182433 update_engine[1548]: I20260123 00:59:23.179629 1548 main.cc:92] Flatcar Update Engine starting
Jan 23 00:59:23.183380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:59:23.265315 extend-filesystems[1537]: Checking size of /dev/vda9
Jan 23 00:59:23.223121 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 23 00:59:23.238640 systemd[1]: motdgen.service: Deactivated successfully.
Jan 23 00:59:23.249152 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 23 00:59:23.789263 tar[1554]: linux-amd64/LICENSE
Jan 23 00:59:24.062690 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 23 00:59:23.775127 dbus-daemon[1534]: [system] SELinux support is enabled
Jan 23 00:59:24.090043 extend-filesystems[1537]: Resized partition /dev/vda9
Jan 23 00:59:23.859667 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 23 00:59:24.239728 tar[1554]: linux-amd64/helm
Jan 23 00:59:24.309371 extend-filesystems[1594]: resize2fs 1.47.3 (8-Jul-2025)
Jan 23 00:59:24.070654 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 23 00:59:24.071507 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 23 00:59:24.099386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:59:24.200356 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 23 00:59:24.200605 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 23 00:59:24.755317 systemd[1]: Started update-engine.service - Update Engine.
Jan 23 00:59:24.789351 update_engine[1548]: I20260123 00:59:24.788774 1548 update_check_scheduler.cc:74] Next update check in 4m24s
Jan 23 00:59:24.795449 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 23 00:59:24.847023 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 23 00:59:25.179428 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 23 00:59:25.633612 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 23 00:59:25.634722 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 23 00:59:25.635592 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 23 00:59:25.660071 extend-filesystems[1594]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 23 00:59:25.660071 extend-filesystems[1594]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 23 00:59:25.660071 extend-filesystems[1594]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 23 00:59:25.652657 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 23 00:59:25.676043 extend-filesystems[1537]: Resized filesystem in /dev/vda9
Jan 23 00:59:25.653256 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 23 00:59:25.734306 bash[1618]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 00:59:26.086007 sshd_keygen[1557]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 23 00:59:26.153368 systemd-logind[1547]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 23 00:59:26.156154 systemd-logind[1547]: New seat seat0.
Jan 23 00:59:27.379278 systemd-logind[1547]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 23 00:59:27.964107 locksmithd[1613]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 23 00:59:28.374297 kernel: kvm_amd: TSC scaling supported
Jan 23 00:59:28.374423 kernel: kvm_amd: Nested Virtualization enabled
Jan 23 00:59:28.374447 kernel: kvm_amd: Nested Paging enabled
Jan 23 00:59:28.374480 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 23 00:59:28.377568 kernel: kvm_amd: PMU virtualization is disabled
Jan 23 00:59:28.693714 containerd[1567]: time="2026-01-23T00:59:28Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 23 00:59:28.702434 containerd[1567]: time="2026-01-23T00:59:28.702234697Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 23 00:59:28.773749 containerd[1567]: time="2026-01-23T00:59:28.773532494Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.356µs"
Jan 23 00:59:28.774178 containerd[1567]: time="2026-01-23T00:59:28.774154711Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 23 00:59:28.774366 containerd[1567]: time="2026-01-23T00:59:28.774347181Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 23 00:59:28.777199 containerd[1567]: time="2026-01-23T00:59:28.777171232Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 23 00:59:28.777634 containerd[1567]: time="2026-01-23T00:59:28.777609439Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 23 00:59:28.778094 containerd[1567]: time="2026-01-23T00:59:28.778068658Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 00:59:28.778375 containerd[1567]: time="2026-01-23T00:59:28.778245695Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 00:59:28.778544 containerd[1567]: time="2026-01-23T00:59:28.778521190Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 00:59:28.779523 containerd[1567]: time="2026-01-23T00:59:28.779494963Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 00:59:28.781420 containerd[1567]: time="2026-01-23T00:59:28.780673657Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 00:59:28.783239 containerd[1567]: time="2026-01-23T00:59:28.782453830Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 00:59:28.783239 containerd[1567]: time="2026-01-23T00:59:28.782477206Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 23 00:59:28.789973 containerd[1567]: time="2026-01-23T00:59:28.789717795Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 23 00:59:28.790623 containerd[1567]: time="2026-01-23T00:59:28.790592100Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 00:59:28.792955 containerd[1567]: time="2026-01-23T00:59:28.792721905Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 00:59:28.794679 containerd[1567]: time="2026-01-23T00:59:28.794652349Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 23 00:59:28.799969 containerd[1567]: time="2026-01-23T00:59:28.799765413Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 23 00:59:28.805291 containerd[1567]: time="2026-01-23T00:59:28.802032151Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 23 00:59:28.805291 containerd[1567]: time="2026-01-23T00:59:28.802153070Z" level=info msg="metadata content store policy set" policy=shared
Jan 23 00:59:28.870115 containerd[1567]: time="2026-01-23T00:59:28.869517377Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 23 00:59:28.871129 containerd[1567]: time="2026-01-23T00:59:28.870245363Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 23 00:59:28.871129 containerd[1567]: time="2026-01-23T00:59:28.870269542Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 23 00:59:28.871129 containerd[1567]: time="2026-01-23T00:59:28.870282065Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 23 00:59:28.871129 containerd[1567]: time="2026-01-23T00:59:28.870298517Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 23 00:59:28.871129 containerd[1567]: time="2026-01-23T00:59:28.870308933Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 23 00:59:28.871129 containerd[1567]: time="2026-01-23T00:59:28.870319896Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 23 00:59:28.871129 containerd[1567]: time="2026-01-23T00:59:28.870330872Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 23 00:59:28.871129 containerd[1567]: time="2026-01-23T00:59:28.870342020Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 23 00:59:28.871129 containerd[1567]: time="2026-01-23T00:59:28.870352282Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 23 00:59:28.871129 containerd[1567]: time="2026-01-23T00:59:28.870362443Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 23 00:59:28.871129 containerd[1567]: time="2026-01-23T00:59:28.870624469Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 23 00:59:28.875420 containerd[1567]: time="2026-01-23T00:59:28.874223182Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 23 00:59:28.875420 containerd[1567]: time="2026-01-23T00:59:28.874267449Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 23 00:59:28.875420 containerd[1567]: time="2026-01-23T00:59:28.874294805Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 23 00:59:28.875420 containerd[1567]: time="2026-01-23T00:59:28.874311034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 23 00:59:28.875420 containerd[1567]: time="2026-01-23T00:59:28.874324788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 23 00:59:28.875420 containerd[1567]: time="2026-01-23T00:59:28.874337963Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 23 00:59:28.875420 containerd[1567]: time="2026-01-23T00:59:28.874353030Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 23 00:59:28.875420 containerd[1567]: time="2026-01-23T00:59:28.874365878Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 23 00:59:28.875420 containerd[1567]: time="2026-01-23T00:59:28.874380437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 23 00:59:28.875420 containerd[1567]: time="2026-01-23T00:59:28.874393987Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 23 00:59:28.875420 containerd[1567]: time="2026-01-23T00:59:28.874408159Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 23 00:59:28.875420 containerd[1567]: time="2026-01-23T00:59:28.874559550Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 23 00:59:28.875420 containerd[1567]: time="2026-01-23T00:59:28.874579270Z" level=info msg="Start snapshots syncer"
Jan 23 00:59:28.875420 containerd[1567]: time="2026-01-23T00:59:28.874681049Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 23 00:59:28.879985 containerd[1567]: time="2026-01-23T00:59:28.878711668Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 23 00:59:28.879985 containerd[1567]: time="2026-01-23T00:59:28.879168526Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 23 00:59:28.880682 containerd[1567]: time="2026-01-23T00:59:28.879242918Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 23 00:59:28.880682 containerd[1567]: time="2026-01-23T00:59:28.879396568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 23 00:59:28.880682 containerd[1567]: time="2026-01-23T00:59:28.879429829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 23 00:59:28.880682 containerd[1567]: time="2026-01-23T00:59:28.879446708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 23 00:59:28.880682 containerd[1567]: time="2026-01-23T00:59:28.879460779Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 23 00:59:28.880682 containerd[1567]: time="2026-01-23T00:59:28.879475999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 23 00:59:28.880682 containerd[1567]: time="2026-01-23T00:59:28.879489916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 23 00:59:28.880682 containerd[1567]: time="2026-01-23T00:59:28.879504251Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 23 00:59:28.880682 containerd[1567]: time="2026-01-23T00:59:28.879633030Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 23 00:59:28.880682 containerd[1567]: time="2026-01-23T00:59:28.879652770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 23 00:59:28.880682 containerd[1567]: time="2026-01-23T00:59:28.879666749Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 23 00:59:28.880682 containerd[1567]: time="2026-01-23T00:59:28.879710812Z" level=info msg="loading plugin"
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 00:59:28.880682 containerd[1567]: time="2026-01-23T00:59:28.879729463Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 00:59:28.880682 containerd[1567]: time="2026-01-23T00:59:28.879741243Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 00:59:28.881438 containerd[1567]: time="2026-01-23T00:59:28.879754172Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 00:59:28.881438 containerd[1567]: time="2026-01-23T00:59:28.879765371Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 00:59:28.881601 containerd[1567]: time="2026-01-23T00:59:28.881573032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 00:59:28.882055 containerd[1567]: time="2026-01-23T00:59:28.881764158Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 00:59:28.882159 containerd[1567]: time="2026-01-23T00:59:28.882141340Z" level=info msg="runtime interface created" Jan 23 00:59:28.882226 containerd[1567]: time="2026-01-23T00:59:28.882210244Z" level=info msg="created NRI interface" Jan 23 00:59:28.882291 containerd[1567]: time="2026-01-23T00:59:28.882275351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 00:59:28.882355 containerd[1567]: time="2026-01-23T00:59:28.882342098Z" level=info msg="Connect containerd service" Jan 23 00:59:28.882430 containerd[1567]: time="2026-01-23T00:59:28.882416591Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 00:59:28.885302 
containerd[1567]: time="2026-01-23T00:59:28.885273242Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 00:59:28.949961 kernel: EDAC MC: Ver: 3.0.0 Jan 23 00:59:29.009578 tar[1554]: linux-amd64/README.md Jan 23 00:59:29.180512 containerd[1567]: time="2026-01-23T00:59:29.180275765Z" level=info msg="Start subscribing containerd event" Jan 23 00:59:29.182030 containerd[1567]: time="2026-01-23T00:59:29.180303758Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 00:59:29.182030 containerd[1567]: time="2026-01-23T00:59:29.181114088Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 00:59:29.183747 containerd[1567]: time="2026-01-23T00:59:29.183192568Z" level=info msg="Start recovering state" Jan 23 00:59:29.186019 containerd[1567]: time="2026-01-23T00:59:29.183993966Z" level=info msg="Start event monitor" Jan 23 00:59:29.186019 containerd[1567]: time="2026-01-23T00:59:29.184019999Z" level=info msg="Start cni network conf syncer for default" Jan 23 00:59:29.186019 containerd[1567]: time="2026-01-23T00:59:29.184042393Z" level=info msg="Start streaming server" Jan 23 00:59:29.186019 containerd[1567]: time="2026-01-23T00:59:29.184062968Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 00:59:29.186019 containerd[1567]: time="2026-01-23T00:59:29.184073810Z" level=info msg="runtime interface starting up..." Jan 23 00:59:29.186019 containerd[1567]: time="2026-01-23T00:59:29.184082740Z" level=info msg="starting plugins..." 
Jan 23 00:59:29.186019 containerd[1567]: time="2026-01-23T00:59:29.184110022Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 23 00:59:29.186019 containerd[1567]: time="2026-01-23T00:59:29.184475652Z" level=info msg="containerd successfully booted in 0.496175s"
Jan 23 00:59:32.981173 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 2978525271 wd_nsec: 2978524211
Jan 23 00:59:38.413550 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 23 00:59:38.448678 systemd[1]: Started containerd.service - containerd container runtime.
Jan 23 00:59:38.466082 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 23 00:59:38.496157 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 23 00:59:38.528342 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:59:38.645636 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 23 00:59:38.781743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:59:38.838510 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 00:59:38.905357 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 23 00:59:38.975670 systemd[1]: Started sshd@0-10.0.0.18:22-10.0.0.1:34186.service - OpenSSH per-connection server daemon (10.0.0.1:34186).
Jan 23 00:59:39.006436 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 23 00:59:39.150496 systemd[1]: issuegen.service: Deactivated successfully.
Jan 23 00:59:39.153425 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 23 00:59:39.187506 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 23 00:59:39.805956 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 23 00:59:39.888552 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 23 00:59:39.969626 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 23 00:59:39.987350 systemd[1]: Reached target getty.target - Login Prompts.
Jan 23 00:59:40.006627 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 23 00:59:40.058029 systemd[1]: Startup finished in 9.408s (kernel) + 24.649s (initrd) + 40.457s (userspace) = 1min 14.516s.
Jan 23 00:59:40.799716 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 34186 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 00:59:40.803761 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:59:40.850581 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 23 00:59:40.854665 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 23 00:59:40.885121 systemd-logind[1547]: New session 1 of user core.
Jan 23 00:59:40.968392 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 23 00:59:40.980322 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 23 00:59:41.045565 (systemd)[1697]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 23 00:59:41.053338 systemd-logind[1547]: New session c1 of user core.
Jan 23 00:59:42.925365 systemd[1697]: Queued start job for default target default.target.
Jan 23 00:59:42.959552 systemd[1697]: Created slice app.slice - User Application Slice.
Jan 23 00:59:42.959689 systemd[1697]: Reached target paths.target - Paths.
Jan 23 00:59:42.960075 systemd[1697]: Reached target timers.target - Timers.
Jan 23 00:59:42.963268 systemd[1697]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 23 00:59:43.084675 systemd[1697]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 23 00:59:43.085569 systemd[1697]: Reached target sockets.target - Sockets.
Jan 23 00:59:43.085630 systemd[1697]: Reached target basic.target - Basic System.
Jan 23 00:59:43.085696 systemd[1697]: Reached target default.target - Main User Target.
Jan 23 00:59:43.085748 systemd[1697]: Startup finished in 2.006s.
Jan 23 00:59:43.087181 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 23 00:59:43.106079 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 23 00:59:44.154252 systemd[1]: Started sshd@1-10.0.0.18:22-10.0.0.1:39364.service - OpenSSH per-connection server daemon (10.0.0.1:39364).
Jan 23 00:59:44.605136 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 39364 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 00:59:44.612094 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:59:44.659054 systemd-logind[1547]: New session 2 of user core.
Jan 23 00:59:44.674697 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 23 00:59:45.807245 sshd[1712]: Connection closed by 10.0.0.1 port 39364
Jan 23 00:59:45.817171 sshd-session[1709]: pam_unix(sshd:session): session closed for user core
Jan 23 00:59:45.867965 systemd[1]: sshd@1-10.0.0.18:22-10.0.0.1:39364.service: Deactivated successfully.
Jan 23 00:59:45.876089 systemd[1]: session-2.scope: Deactivated successfully.
Jan 23 00:59:45.879298 systemd-logind[1547]: Session 2 logged out. Waiting for processes to exit.
Jan 23 00:59:45.891080 systemd[1]: Started sshd@2-10.0.0.18:22-10.0.0.1:39380.service - OpenSSH per-connection server daemon (10.0.0.1:39380).
Jan 23 00:59:45.906891 systemd-logind[1547]: Removed session 2.
Jan 23 00:59:46.734042 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 39380 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 00:59:46.740710 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:59:46.779454 systemd-logind[1547]: New session 3 of user core.
Jan 23 00:59:46.796228 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 23 00:59:46.921117 sshd[1722]: Connection closed by 10.0.0.1 port 39380
Jan 23 00:59:46.920458 sshd-session[1718]: pam_unix(sshd:session): session closed for user core
Jan 23 00:59:46.939573 systemd[1]: sshd@2-10.0.0.18:22-10.0.0.1:39380.service: Deactivated successfully.
Jan 23 00:59:46.942229 kubelet[1673]: E0123 00:59:46.940602 1673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 00:59:46.944619 systemd[1]: session-3.scope: Deactivated successfully.
Jan 23 00:59:46.952276 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 00:59:46.953295 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 00:59:46.956637 systemd[1]: kubelet.service: Consumed 15.279s CPU time, 270.9M memory peak.
Jan 23 00:59:46.963176 systemd-logind[1547]: Session 3 logged out. Waiting for processes to exit.
Jan 23 00:59:46.971726 systemd[1]: Started sshd@3-10.0.0.18:22-10.0.0.1:39386.service - OpenSSH per-connection server daemon (10.0.0.1:39386).
Jan 23 00:59:46.977315 systemd-logind[1547]: Removed session 3.
Jan 23 00:59:47.788485 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 39386 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 00:59:47.808298 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:59:47.837996 systemd-logind[1547]: New session 4 of user core.
Jan 23 00:59:47.869450 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 23 00:59:48.029562 sshd[1732]: Connection closed by 10.0.0.1 port 39386
Jan 23 00:59:48.030638 sshd-session[1729]: pam_unix(sshd:session): session closed for user core
Jan 23 00:59:48.072030 systemd[1]: sshd@3-10.0.0.18:22-10.0.0.1:39386.service: Deactivated successfully.
Jan 23 00:59:48.076428 systemd[1]: session-4.scope: Deactivated successfully.
Jan 23 00:59:48.080162 systemd-logind[1547]: Session 4 logged out. Waiting for processes to exit.
Jan 23 00:59:48.087124 systemd[1]: Started sshd@4-10.0.0.18:22-10.0.0.1:39394.service - OpenSSH per-connection server daemon (10.0.0.1:39394).
Jan 23 00:59:48.091641 systemd-logind[1547]: Removed session 4.
Jan 23 00:59:48.244387 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 39394 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 00:59:48.247511 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:59:48.274034 systemd-logind[1547]: New session 5 of user core.
Jan 23 00:59:48.296388 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 23 00:59:48.443604 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 23 00:59:48.444552 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 00:59:48.509339 sudo[1742]: pam_unix(sudo:session): session closed for user root
Jan 23 00:59:48.520903 sshd[1741]: Connection closed by 10.0.0.1 port 39394
Jan 23 00:59:48.520337 sshd-session[1738]: pam_unix(sshd:session): session closed for user core
Jan 23 00:59:48.539352 systemd[1]: sshd@4-10.0.0.18:22-10.0.0.1:39394.service: Deactivated successfully.
Jan 23 00:59:48.544293 systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 00:59:48.556650 systemd-logind[1547]: Session 5 logged out. Waiting for processes to exit.
Jan 23 00:59:48.568228 systemd[1]: Started sshd@5-10.0.0.18:22-10.0.0.1:39402.service - OpenSSH per-connection server daemon (10.0.0.1:39402).
Jan 23 00:59:48.570311 systemd-logind[1547]: Removed session 5.
Jan 23 00:59:48.851213 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 39402 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 00:59:48.868663 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:59:48.896538 systemd-logind[1547]: New session 6 of user core.
Jan 23 00:59:48.902224 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 23 00:59:49.022656 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 23 00:59:49.025032 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 00:59:49.112376 sudo[1753]: pam_unix(sudo:session): session closed for user root
Jan 23 00:59:49.135222 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 23 00:59:49.136001 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 00:59:49.211631 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 00:59:49.480548 augenrules[1775]: No rules
Jan 23 00:59:49.482743 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 00:59:49.483521 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 00:59:49.490497 sudo[1752]: pam_unix(sudo:session): session closed for user root
Jan 23 00:59:49.505718 sshd[1751]: Connection closed by 10.0.0.1 port 39402
Jan 23 00:59:49.508619 sshd-session[1748]: pam_unix(sshd:session): session closed for user core
Jan 23 00:59:49.574303 systemd[1]: sshd@5-10.0.0.18:22-10.0.0.1:39402.service: Deactivated successfully.
Jan 23 00:59:49.582224 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 00:59:49.584961 systemd-logind[1547]: Session 6 logged out. Waiting for processes to exit.
Jan 23 00:59:49.593534 systemd[1]: Started sshd@6-10.0.0.18:22-10.0.0.1:39404.service - OpenSSH per-connection server daemon (10.0.0.1:39404).
Jan 23 00:59:49.598426 systemd-logind[1547]: Removed session 6.
Jan 23 00:59:50.016418 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 39404 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 00:59:50.022493 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:59:50.058435 systemd-logind[1547]: New session 7 of user core.
Jan 23 00:59:50.074696 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 00:59:50.205428 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 23 00:59:50.207620 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 00:59:57.112366 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 23 00:59:57.240287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:59:58.806352 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 23 00:59:58.850346 (dockerd)[1811]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 23 01:00:00.799191 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:00:00.873651 (kubelet)[1821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:00:03.132658 kubelet[1821]: E0123 01:00:03.131747 1821 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:00:03.287140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:00:03.291293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:00:03.293009 systemd[1]: kubelet.service: Consumed 4.312s CPU time, 110.4M memory peak.
Jan 23 01:00:06.112528 dockerd[1811]: time="2026-01-23T01:00:06.111357690Z" level=info msg="Starting up"
Jan 23 01:00:06.146328 dockerd[1811]: time="2026-01-23T01:00:06.145081252Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 23 01:00:06.943361 dockerd[1811]: time="2026-01-23T01:00:06.941287504Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 23 01:00:07.507240 systemd[1]: var-lib-docker-metacopy\x2dcheck873864739-merged.mount: Deactivated successfully.
Jan 23 01:00:07.852524 dockerd[1811]: time="2026-01-23T01:00:07.843703113Z" level=info msg="Loading containers: start."
Jan 23 01:00:08.037457 kernel: Initializing XFRM netlink socket
Jan 23 01:00:10.552455 update_engine[1548]: I20260123 01:00:10.547698 1548 update_attempter.cc:509] Updating boot flags...
Jan 23 01:00:13.001315 systemd-networkd[1485]: docker0: Link UP
Jan 23 01:00:13.030374 dockerd[1811]: time="2026-01-23T01:00:13.030184023Z" level=info msg="Loading containers: done."
Jan 23 01:00:13.196350 dockerd[1811]: time="2026-01-23T01:00:13.196135950Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 23 01:00:13.196989 dockerd[1811]: time="2026-01-23T01:00:13.196620929Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 23 01:00:13.197598 dockerd[1811]: time="2026-01-23T01:00:13.197489277Z" level=info msg="Initializing buildkit"
Jan 23 01:00:13.244168 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 23 01:00:13.251446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:00:13.496051 dockerd[1811]: time="2026-01-23T01:00:13.483714537Z" level=info msg="Completed buildkit initialization"
Jan 23 01:00:13.640107 dockerd[1811]: time="2026-01-23T01:00:13.634754104Z" level=info msg="Daemon has completed initialization"
Jan 23 01:00:13.640107 dockerd[1811]: time="2026-01-23T01:00:13.636272722Z" level=info msg="API listen on /run/docker.sock"
Jan 23 01:00:13.641322 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 23 01:00:14.628627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:00:14.655689 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:00:15.597304 kubelet[2059]: E0123 01:00:15.596528 2059 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:00:15.612525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:00:15.613028 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:00:15.619576 systemd[1]: kubelet.service: Consumed 1.766s CPU time, 112.3M memory peak.
Jan 23 01:00:19.744148 containerd[1567]: time="2026-01-23T01:00:19.743519387Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Jan 23 01:00:22.703397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2872828146.mount: Deactivated successfully.
Jan 23 01:00:25.750684 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 23 01:00:25.766730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:00:27.414748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:00:27.487957 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:00:28.366976 kubelet[2124]: E0123 01:00:28.361021 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:00:28.388299 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:00:28.388744 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:00:28.390736 systemd[1]: kubelet.service: Consumed 1.518s CPU time, 109.1M memory peak.
Jan 23 01:00:36.328426 containerd[1567]: time="2026-01-23T01:00:36.328102064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:00:36.332696 containerd[1567]: time="2026-01-23T01:00:36.331569765Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712"
Jan 23 01:00:36.343403 containerd[1567]: time="2026-01-23T01:00:36.343103821Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:00:36.391046 containerd[1567]: time="2026-01-23T01:00:36.389725136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:00:36.397754 containerd[1567]: time="2026-01-23T01:00:36.397478552Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 16.653759854s"
Jan 23 01:00:36.397754 containerd[1567]: time="2026-01-23T01:00:36.397744004Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\""
Jan 23 01:00:36.411130 containerd[1567]: time="2026-01-23T01:00:36.409076741Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Jan 23 01:00:38.508468 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 23 01:00:38.526139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:00:40.591608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:00:40.626357 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:00:41.498194 kubelet[2160]: E0123 01:00:41.497623 2160 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:00:41.507376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:00:41.508485 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:00:41.510116 systemd[1]: kubelet.service: Consumed 2.016s CPU time, 110.7M memory peak.
Jan 23 01:00:49.770967 containerd[1567]: time="2026-01-23T01:00:49.770096299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:00:49.774158 containerd[1567]: time="2026-01-23T01:00:49.774075279Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781"
Jan 23 01:00:49.780126 containerd[1567]: time="2026-01-23T01:00:49.780021081Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:00:49.815948 containerd[1567]: time="2026-01-23T01:00:49.815183810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:00:49.988681 containerd[1567]: time="2026-01-23T01:00:49.968012860Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 13.558878642s"
Jan 23 01:00:49.994229 containerd[1567]: time="2026-01-23T01:00:49.989436417Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Jan 23 01:00:50.016086 containerd[1567]: time="2026-01-23T01:00:50.014276590Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Jan 23 01:00:51.772964 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 23 01:00:51.792559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:00:52.418546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:00:52.445178 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:00:53.106190 kubelet[2181]: E0123 01:00:53.106062 2181 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:00:53.115257 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:00:53.116308 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:00:53.117698 systemd[1]: kubelet.service: Consumed 1.222s CPU time, 110.9M memory peak.
Jan 23 01:00:57.906961 containerd[1567]: time="2026-01-23T01:00:57.904594595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:00:57.926520 containerd[1567]: time="2026-01-23T01:00:57.926043314Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102"
Jan 23 01:00:57.936544 containerd[1567]: time="2026-01-23T01:00:57.936206036Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:00:58.248587 containerd[1567]: time="2026-01-23T01:00:58.247625544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:00:58.404287 containerd[1567]: time="2026-01-23T01:00:58.399528898Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 8.385084469s"
Jan 23 01:00:58.404287 containerd[1567]: time="2026-01-23T01:00:58.402075397Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Jan 23 01:00:58.413896 containerd[1567]: time="2026-01-23T01:00:58.413336941Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Jan 23 01:01:03.254629 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 23 01:01:03.284428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:01:05.023067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:01:05.046718 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:01:06.264458 kubelet[2201]: E0123 01:01:06.263709 2201 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:01:06.276315 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:01:06.277337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:01:06.292672 systemd[1]: kubelet.service: Consumed 2.442s CPU time, 111M memory peak.
Jan 23 01:01:06.358601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3440689373.mount: Deactivated successfully.
Jan 23 01:01:13.983334 containerd[1567]: time="2026-01-23T01:01:13.979499989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:01:13.997325 containerd[1567]: time="2026-01-23T01:01:13.990459942Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096"
Jan 23 01:01:13.997325 containerd[1567]: time="2026-01-23T01:01:13.992370430Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:01:14.015638 containerd[1567]: time="2026-01-23T01:01:14.009673166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:01:14.015638 containerd[1567]: time="2026-01-23T01:01:14.013410013Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 15.599935222s"
Jan 23 01:01:14.015638 containerd[1567]: time="2026-01-23T01:01:14.014493483Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Jan 23 01:01:14.021262 containerd[1567]: time="2026-01-23T01:01:14.021232806Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jan 23 01:01:15.507743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261346079.mount: Deactivated successfully.
Jan 23 01:01:16.499514 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 23 01:01:16.507300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:01:17.749182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:01:17.919338 (kubelet)[2233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:01:18.667003 kubelet[2233]: E0123 01:01:18.666466 2233 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:01:18.686693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:01:18.687321 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:01:18.690989 systemd[1]: kubelet.service: Consumed 1.419s CPU time, 110.9M memory peak.
Jan 23 01:01:27.361281 containerd[1567]: time="2026-01-23T01:01:27.358617293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:01:27.363614 containerd[1567]: time="2026-01-23T01:01:27.362348467Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Jan 23 01:01:27.366582 containerd[1567]: time="2026-01-23T01:01:27.366373174Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:01:27.373240 containerd[1567]: time="2026-01-23T01:01:27.373196675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:01:27.377581 containerd[1567]: time="2026-01-23T01:01:27.377538269Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 13.355997884s"
Jan 23 01:01:27.378530 containerd[1567]: time="2026-01-23T01:01:27.378502086Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jan 23 01:01:27.402270 containerd[1567]: time="2026-01-23T01:01:27.402237349Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 23 01:01:28.747307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 23 01:01:28.754102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:01:28.759302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1906589459.mount: Deactivated successfully.
Jan 23 01:01:28.801066 containerd[1567]: time="2026-01-23T01:01:28.800351225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 01:01:28.803630 containerd[1567]: time="2026-01-23T01:01:28.803126007Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jan 23 01:01:28.808016 containerd[1567]: time="2026-01-23T01:01:28.806106964Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 01:01:28.815346 containerd[1567]: time="2026-01-23T01:01:28.814992330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 01:01:28.818352 containerd[1567]: time="2026-01-23T01:01:28.818208748Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.415689613s"
Jan 23 01:01:28.818352 containerd[1567]: time="2026-01-23T01:01:28.818248091Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 23 01:01:28.826543 containerd[1567]: time="2026-01-23T01:01:28.826308952Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jan 23 01:01:29.964548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:01:30.016143 (kubelet)[2295]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:01:30.056066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3818940082.mount: Deactivated successfully.
Jan 23 01:01:31.751368 kubelet[2295]: E0123 01:01:31.751222 2295 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:01:31.770151 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:01:31.773568 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:01:31.780560 systemd[1]: kubelet.service: Consumed 2.220s CPU time, 109.7M memory peak.
Jan 23 01:01:42.026543 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Jan 23 01:01:42.074404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:01:43.469556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:01:43.558109 (kubelet)[2362]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:01:44.295340 kubelet[2362]: E0123 01:01:44.294758 2362 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:01:44.311405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:01:44.312226 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:01:44.313642 systemd[1]: kubelet.service: Consumed 1.358s CPU time, 108.4M memory peak.
Jan 23 01:01:53.076618 containerd[1567]: time="2026-01-23T01:01:53.071539213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:01:53.088967 containerd[1567]: time="2026-01-23T01:01:53.088251201Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227"
Jan 23 01:01:53.102705 containerd[1567]: time="2026-01-23T01:01:53.098317700Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:01:53.113519 containerd[1567]: time="2026-01-23T01:01:53.113324669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:01:53.121518 containerd[1567]: time="2026-01-23T01:01:53.121212477Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 24.294754097s"
Jan 23 01:01:53.121518 containerd[1567]: time="2026-01-23T01:01:53.121269888Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jan 23 01:01:54.521110 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Jan 23 01:01:54.555380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:01:55.772167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:01:55.823072 (kubelet)[2404]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:01:56.379199 kubelet[2404]: E0123 01:01:56.371450 2404 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:01:56.394464 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:01:56.395223 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:01:56.396268 systemd[1]: kubelet.service: Consumed 986ms CPU time, 110.7M memory peak.
Jan 23 01:02:06.811497 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Jan 23 01:02:07.041633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:02:09.164257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:02:09.211703 (kubelet)[2422]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:02:09.704139 kubelet[2422]: E0123 01:02:09.703724 2422 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:02:09.733710 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:02:09.735312 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:02:09.743109 systemd[1]: kubelet.service: Consumed 1.479s CPU time, 110.6M memory peak.
Jan 23 01:02:11.684137 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:02:11.684474 systemd[1]: kubelet.service: Consumed 1.479s CPU time, 110.6M memory peak.
Jan 23 01:02:11.700127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:02:11.925522 systemd[1]: Reload requested from client PID 2439 ('systemctl') (unit session-7.scope)...
Jan 23 01:02:11.926031 systemd[1]: Reloading...
Jan 23 01:02:12.316394 zram_generator::config[2482]: No configuration found.
Jan 23 01:02:13.542737 systemd[1]: Reloading finished in 1615 ms.
Jan 23 01:02:13.756268 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 23 01:02:13.756591 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 23 01:02:13.759106 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:02:13.759171 systemd[1]: kubelet.service: Consumed 408ms CPU time, 98.2M memory peak.
Jan 23 01:02:13.766397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:02:14.497185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:02:14.529134 (kubelet)[2530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 01:02:15.238228 kubelet[2530]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 01:02:15.238228 kubelet[2530]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 01:02:15.238228 kubelet[2530]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 01:02:15.240582 kubelet[2530]: I0123 01:02:15.238274 2530 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 01:02:15.928377 kubelet[2530]: I0123 01:02:15.927170 2530 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 23 01:02:15.928377 kubelet[2530]: I0123 01:02:15.927725 2530 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 01:02:15.940199 kubelet[2530]: I0123 01:02:15.939573 2530 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 23 01:02:16.325501 kubelet[2530]: E0123 01:02:16.324746 2530 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 23 01:02:16.330090 kubelet[2530]: I0123 01:02:16.329664 2530 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 01:02:16.391194 kubelet[2530]: I0123 01:02:16.390997 2530 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 01:02:16.814182 kubelet[2530]: I0123 01:02:16.813473 2530 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 01:02:16.822085 kubelet[2530]: I0123 01:02:16.817141 2530 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 01:02:16.822085 kubelet[2530]: I0123 01:02:16.819351 2530 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 01:02:16.822085 kubelet[2530]: I0123 01:02:16.820051 2530 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 01:02:16.822085 kubelet[2530]: I0123 01:02:16.820306 2530 container_manager_linux.go:303] "Creating device plugin manager"
Jan 23 01:02:16.822085 kubelet[2530]: I0123 01:02:16.821377 2530 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 01:02:16.844273 kubelet[2530]: I0123 01:02:16.843006 2530 kubelet.go:480] "Attempting to sync node with API server"
Jan 23 01:02:16.844273 kubelet[2530]: I0123 01:02:16.843531 2530 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 01:02:16.844273 kubelet[2530]: I0123 01:02:16.843711 2530 kubelet.go:386] "Adding apiserver pod source"
Jan 23 01:02:16.844273 kubelet[2530]: I0123 01:02:16.844079 2530 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 01:02:16.864058 kubelet[2530]: E0123 01:02:16.861639 2530 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 23 01:02:16.866594 kubelet[2530]: E0123 01:02:16.866469 2530 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 23 01:02:16.894237 kubelet[2530]: I0123 01:02:16.894065 2530 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 01:02:16.896128 kubelet[2530]: I0123 01:02:16.895612 2530 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 23 01:02:16.901586 kubelet[2530]: W0123 01:02:16.901407 2530 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 01:02:16.927150 kubelet[2530]: I0123 01:02:16.925314 2530 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 01:02:16.927150 kubelet[2530]: I0123 01:02:16.926248 2530 server.go:1289] "Started kubelet"
Jan 23 01:02:16.995593 kubelet[2530]: I0123 01:02:16.937415 2530 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 01:02:16.995593 kubelet[2530]: I0123 01:02:16.938517 2530 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 01:02:16.998482 kubelet[2530]: I0123 01:02:16.996509 2530 server.go:317] "Adding debug handlers to kubelet server"
Jan 23 01:02:17.043024 kubelet[2530]: I0123 01:02:17.042436 2530 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 01:02:17.064504 kubelet[2530]: I0123 01:02:17.063907 2530 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 01:02:17.073240 kubelet[2530]: I0123 01:02:17.070037 2530 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 01:02:17.075210 kubelet[2530]: E0123 01:02:17.042635 2530 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.18:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.18:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d367c7ce2fd00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 01:02:16.925568256 +0000 UTC m=+2.322934380,LastTimestamp:2026-01-23 01:02:16.925568256 +0000 UTC m=+2.322934380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 23 01:02:17.090587 kubelet[2530]: I0123 01:02:17.089211 2530 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 01:02:17.102174 kubelet[2530]: I0123 01:02:17.098276 2530 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 01:02:17.102174 kubelet[2530]: E0123 01:02:17.099727 2530 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 23 01:02:17.106051 kubelet[2530]: E0123 01:02:17.103355 2530 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 23 01:02:17.106051 kubelet[2530]: I0123 01:02:17.103488 2530 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 01:02:17.106599 kubelet[2530]: E0123 01:02:17.106157 2530 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="200ms"
Jan 23 01:02:17.116395 kubelet[2530]: I0123 01:02:17.115292 2530 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 01:02:17.197710 kubelet[2530]: E0123 01:02:17.197423 2530 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 01:02:17.200301 kubelet[2530]: I0123 01:02:17.200117 2530 factory.go:223] Registration of the containerd container factory successfully
Jan 23 01:02:17.200301 kubelet[2530]: I0123 01:02:17.200227 2530 factory.go:223] Registration of the systemd container factory successfully
Jan 23 01:02:17.200746 kubelet[2530]: E0123 01:02:17.200718 2530 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 23 01:02:17.304264 kubelet[2530]: E0123 01:02:17.304167 2530 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 23 01:02:17.323049 kubelet[2530]: E0123 01:02:17.320667 2530 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="400ms"
Jan 23 01:02:17.346293 kubelet[2530]: I0123 01:02:17.345478 2530 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 01:02:17.349227 kubelet[2530]: I0123 01:02:17.347568 2530 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 01:02:17.349227 kubelet[2530]: I0123 01:02:17.347602 2530 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 01:02:17.361889 kubelet[2530]: I0123 01:02:17.361600 2530 policy_none.go:49] "None policy: Start"
Jan 23 01:02:17.361889 kubelet[2530]: I0123 01:02:17.361727 2530 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 23 01:02:17.364153 kubelet[2530]: I0123 01:02:17.362423 2530 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 01:02:17.387109 kubelet[2530]: I0123 01:02:17.386363 2530 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 23 01:02:17.395684 kubelet[2530]: I0123 01:02:17.395571 2530 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 23 01:02:17.397291 kubelet[2530]: I0123 01:02:17.397181 2530 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 23 01:02:17.397612 kubelet[2530]: I0123 01:02:17.397516 2530 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 01:02:17.397612 kubelet[2530]: I0123 01:02:17.397603 2530 kubelet.go:2436] "Starting kubelet main sync loop"
Jan 23 01:02:17.398154 kubelet[2530]: E0123 01:02:17.397749 2530 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 01:02:17.404023 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 23 01:02:17.404669 kubelet[2530]: E0123 01:02:17.404227 2530 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 23 01:02:17.406487 kubelet[2530]: E0123 01:02:17.406443 2530 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 23 01:02:17.430034 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 23 01:02:17.439170 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 23 01:02:17.461526 kubelet[2530]: E0123 01:02:17.460694 2530 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 23 01:02:17.461526 kubelet[2530]: I0123 01:02:17.461344 2530 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 01:02:17.461526 kubelet[2530]: I0123 01:02:17.461417 2530 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 01:02:17.464439 kubelet[2530]: I0123 01:02:17.464302 2530 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 01:02:17.478696 kubelet[2530]: E0123 01:02:17.477523 2530 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 01:02:17.478696 kubelet[2530]: E0123 01:02:17.478281 2530 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 23 01:02:17.506567 kubelet[2530]: I0123 01:02:17.506436 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/053c6872a110a33ba6f2df8891206ee5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"053c6872a110a33ba6f2df8891206ee5\") " pod="kube-system/kube-apiserver-localhost"
Jan 23 01:02:17.507617 kubelet[2530]: I0123 01:02:17.507488 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/053c6872a110a33ba6f2df8891206ee5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"053c6872a110a33ba6f2df8891206ee5\") " pod="kube-system/kube-apiserver-localhost"
Jan 23 01:02:17.508134 kubelet[2530]: I0123 01:02:17.508032 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/053c6872a110a33ba6f2df8891206ee5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"053c6872a110a33ba6f2df8891206ee5\") " pod="kube-system/kube-apiserver-localhost"
Jan 23 01:02:17.531458 systemd[1]: Created slice kubepods-burstable-pod053c6872a110a33ba6f2df8891206ee5.slice - libcontainer container kubepods-burstable-pod053c6872a110a33ba6f2df8891206ee5.slice.
Jan 23 01:02:17.545439 kubelet[2530]: E0123 01:02:17.545310 2530 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 23 01:02:17.636248 kubelet[2530]: I0123 01:02:17.634276 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 23 01:02:17.636248 kubelet[2530]: I0123 01:02:17.634557 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 23 01:02:17.636248 kubelet[2530]: I0123 01:02:17.634586 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 23 01:02:17.636248 kubelet[2530]: I0123 01:02:17.634626 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:02:17.636248 kubelet[2530]: I0123 01:02:17.634684 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:02:17.637150 kubelet[2530]: I0123 01:02:17.634709 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 23 01:02:17.637150 kubelet[2530]: I0123 01:02:17.637043 2530 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:02:17.637687 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. 
Jan 23 01:02:17.639508 kubelet[2530]: E0123 01:02:17.639126 2530 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Jan 23 01:02:17.722186 kubelet[2530]: E0123 01:02:17.721531 2530 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:02:17.724292 kubelet[2530]: E0123 01:02:17.724103 2530 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="800ms" Jan 23 01:02:17.739641 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Jan 23 01:02:17.777635 kubelet[2530]: E0123 01:02:17.777120 2530 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 01:02:17.790936 kubelet[2530]: E0123 01:02:17.790672 2530 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:02:17.803536 kubelet[2530]: E0123 01:02:17.802456 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:17.849041 kubelet[2530]: E0123 01:02:17.848504 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:17.899405 containerd[1567]: time="2026-01-23T01:02:17.894543303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 23 01:02:17.930362 containerd[1567]: time="2026-01-23T01:02:17.922391398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:053c6872a110a33ba6f2df8891206ee5,Namespace:kube-system,Attempt:0,}" Jan 23 01:02:17.936952 kubelet[2530]: I0123 01:02:17.936683 2530 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:02:17.951537 kubelet[2530]: E0123 01:02:17.948396 2530 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Jan 23 01:02:18.034156 kubelet[2530]: E0123 01:02:18.032920 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:18.048340 containerd[1567]: time="2026-01-23T01:02:18.047925368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 23 01:02:18.050375 kubelet[2530]: E0123 01:02:18.050224 2530 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:02:18.751000 kubelet[2530]: E0123 01:02:18.746669 2530 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="1.6s" Jan 23 01:02:18.751000 kubelet[2530]: E0123 01:02:18.749921 2530 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:02:18.755223 kubelet[2530]: E0123 01:02:18.753246 2530 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 01:02:18.775336 kubelet[2530]: I0123 01:02:18.768455 2530 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:02:18.775336 kubelet[2530]: E0123 01:02:18.771454 2530 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Jan 23 01:02:18.797732 kubelet[2530]: E0123 01:02:18.797275 2530 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 01:02:18.810304 containerd[1567]: time="2026-01-23T01:02:18.809158592Z" level=info msg="connecting to shim 30da438abf3dc33a4fd7453526e15258c1a94f8ff7121750e84b72ec1471c1c5" 
address="unix:///run/containerd/s/28c5fc82a494fdc38b6fdcd2f54971a9f8510f9f777f41461a106ff78a8226bf" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:02:18.820502 containerd[1567]: time="2026-01-23T01:02:18.818535251Z" level=info msg="connecting to shim 08539a6df246c4af8486383c3c70a801ebcd709f61b512b76c9e955f1866ca5f" address="unix:///run/containerd/s/13dbad35d20a84db8eeb7067fe4cf57e1c805db447cf8ba8dba05635e3884957" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:02:19.236357 containerd[1567]: time="2026-01-23T01:02:19.210296441Z" level=info msg="connecting to shim 5cb66d823f37deaa9c9698cb19aeb57448cff62f3b1c24cf91cfc755e8cf47e3" address="unix:///run/containerd/s/9ad0c747677d8059b86e138b51b32c656785bfd760ecc3df49213be43bbd40da" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:02:19.615011 kubelet[2530]: E0123 01:02:19.614592 2530 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 01:02:19.816741 kubelet[2530]: I0123 01:02:19.816533 2530 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:02:19.821996 kubelet[2530]: E0123 01:02:19.821588 2530 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Jan 23 01:02:20.048248 systemd[1]: Started cri-containerd-30da438abf3dc33a4fd7453526e15258c1a94f8ff7121750e84b72ec1471c1c5.scope - libcontainer container 30da438abf3dc33a4fd7453526e15258c1a94f8ff7121750e84b72ec1471c1c5. 
Jan 23 01:02:20.399139 kubelet[2530]: E0123 01:02:20.369600 2530 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="3.2s" Jan 23 01:02:20.509046 kubelet[2530]: E0123 01:02:20.507625 2530 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 01:02:20.562988 systemd[1]: Started cri-containerd-5cb66d823f37deaa9c9698cb19aeb57448cff62f3b1c24cf91cfc755e8cf47e3.scope - libcontainer container 5cb66d823f37deaa9c9698cb19aeb57448cff62f3b1c24cf91cfc755e8cf47e3. Jan 23 01:02:20.650336 systemd[1]: Started cri-containerd-08539a6df246c4af8486383c3c70a801ebcd709f61b512b76c9e955f1866ca5f.scope - libcontainer container 08539a6df246c4af8486383c3c70a801ebcd709f61b512b76c9e955f1866ca5f. 
Jan 23 01:02:21.054202 kubelet[2530]: E0123 01:02:21.047454 2530 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:02:21.239210 kubelet[2530]: E0123 01:02:21.238758 2530 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:02:21.296481 containerd[1567]: time="2026-01-23T01:02:21.296100802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cb66d823f37deaa9c9698cb19aeb57448cff62f3b1c24cf91cfc755e8cf47e3\"" Jan 23 01:02:21.328737 kubelet[2530]: E0123 01:02:21.328219 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:21.354452 containerd[1567]: time="2026-01-23T01:02:21.354327672Z" level=info msg="CreateContainer within sandbox \"5cb66d823f37deaa9c9698cb19aeb57448cff62f3b1c24cf91cfc755e8cf47e3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 01:02:21.408107 containerd[1567]: time="2026-01-23T01:02:21.405335542Z" level=info msg="Container 716b1c984fc91250425933d68dc456812436a34eccde144fe2e1546cb32977e1: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:02:21.422009 containerd[1567]: time="2026-01-23T01:02:21.421949148Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"30da438abf3dc33a4fd7453526e15258c1a94f8ff7121750e84b72ec1471c1c5\"" Jan 23 01:02:21.435013 kubelet[2530]: I0123 01:02:21.434565 2530 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:02:21.437062 kubelet[2530]: E0123 01:02:21.436094 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:21.443745 kubelet[2530]: E0123 01:02:21.443712 2530 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Jan 23 01:02:21.458429 containerd[1567]: time="2026-01-23T01:02:21.458098021Z" level=info msg="CreateContainer within sandbox \"5cb66d823f37deaa9c9698cb19aeb57448cff62f3b1c24cf91cfc755e8cf47e3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"716b1c984fc91250425933d68dc456812436a34eccde144fe2e1546cb32977e1\"" Jan 23 01:02:21.462044 containerd[1567]: time="2026-01-23T01:02:21.461682924Z" level=info msg="CreateContainer within sandbox \"30da438abf3dc33a4fd7453526e15258c1a94f8ff7121750e84b72ec1471c1c5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 01:02:21.462749 containerd[1567]: time="2026-01-23T01:02:21.462722893Z" level=info msg="StartContainer for \"716b1c984fc91250425933d68dc456812436a34eccde144fe2e1546cb32977e1\"" Jan 23 01:02:21.465713 containerd[1567]: time="2026-01-23T01:02:21.465583057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:053c6872a110a33ba6f2df8891206ee5,Namespace:kube-system,Attempt:0,} returns sandbox id \"08539a6df246c4af8486383c3c70a801ebcd709f61b512b76c9e955f1866ca5f\"" Jan 23 01:02:21.468324 kubelet[2530]: 
E0123 01:02:21.467997 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:21.469327 containerd[1567]: time="2026-01-23T01:02:21.469041301Z" level=info msg="connecting to shim 716b1c984fc91250425933d68dc456812436a34eccde144fe2e1546cb32977e1" address="unix:///run/containerd/s/9ad0c747677d8059b86e138b51b32c656785bfd760ecc3df49213be43bbd40da" protocol=ttrpc version=3 Jan 23 01:02:21.501053 containerd[1567]: time="2026-01-23T01:02:21.496069745Z" level=info msg="CreateContainer within sandbox \"08539a6df246c4af8486383c3c70a801ebcd709f61b512b76c9e955f1866ca5f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 01:02:21.503719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3663277062.mount: Deactivated successfully. Jan 23 01:02:21.508380 containerd[1567]: time="2026-01-23T01:02:21.507924805Z" level=info msg="Container 9d5db59dc60e9a5edd390e30ba247371a434c0f0d0b23ae0b2c40b694c6c0398: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:02:21.542391 containerd[1567]: time="2026-01-23T01:02:21.540713407Z" level=info msg="Container a8352e5e50011dcb90427ef0827e66225c9cd3d0c7b719166121dae8605de346: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:02:21.561419 containerd[1567]: time="2026-01-23T01:02:21.558099891Z" level=info msg="CreateContainer within sandbox \"30da438abf3dc33a4fd7453526e15258c1a94f8ff7121750e84b72ec1471c1c5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9d5db59dc60e9a5edd390e30ba247371a434c0f0d0b23ae0b2c40b694c6c0398\"" Jan 23 01:02:21.564633 containerd[1567]: time="2026-01-23T01:02:21.564601205Z" level=info msg="StartContainer for \"9d5db59dc60e9a5edd390e30ba247371a434c0f0d0b23ae0b2c40b694c6c0398\"" Jan 23 01:02:21.569390 containerd[1567]: time="2026-01-23T01:02:21.569053374Z" level=info msg="connecting to shim 
9d5db59dc60e9a5edd390e30ba247371a434c0f0d0b23ae0b2c40b694c6c0398" address="unix:///run/containerd/s/28c5fc82a494fdc38b6fdcd2f54971a9f8510f9f777f41461a106ff78a8226bf" protocol=ttrpc version=3 Jan 23 01:02:21.572617 systemd[1]: Started cri-containerd-716b1c984fc91250425933d68dc456812436a34eccde144fe2e1546cb32977e1.scope - libcontainer container 716b1c984fc91250425933d68dc456812436a34eccde144fe2e1546cb32977e1. Jan 23 01:02:21.583459 containerd[1567]: time="2026-01-23T01:02:21.581583720Z" level=info msg="CreateContainer within sandbox \"08539a6df246c4af8486383c3c70a801ebcd709f61b512b76c9e955f1866ca5f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a8352e5e50011dcb90427ef0827e66225c9cd3d0c7b719166121dae8605de346\"" Jan 23 01:02:21.583708 containerd[1567]: time="2026-01-23T01:02:21.583674486Z" level=info msg="StartContainer for \"a8352e5e50011dcb90427ef0827e66225c9cd3d0c7b719166121dae8605de346\"" Jan 23 01:02:21.586238 containerd[1567]: time="2026-01-23T01:02:21.586160479Z" level=info msg="connecting to shim a8352e5e50011dcb90427ef0827e66225c9cd3d0c7b719166121dae8605de346" address="unix:///run/containerd/s/13dbad35d20a84db8eeb7067fe4cf57e1c805db447cf8ba8dba05635e3884957" protocol=ttrpc version=3 Jan 23 01:02:21.685396 systemd[1]: Started cri-containerd-9d5db59dc60e9a5edd390e30ba247371a434c0f0d0b23ae0b2c40b694c6c0398.scope - libcontainer container 9d5db59dc60e9a5edd390e30ba247371a434c0f0d0b23ae0b2c40b694c6c0398. Jan 23 01:02:21.722923 systemd[1]: Started cri-containerd-a8352e5e50011dcb90427ef0827e66225c9cd3d0c7b719166121dae8605de346.scope - libcontainer container a8352e5e50011dcb90427ef0827e66225c9cd3d0c7b719166121dae8605de346. 
Jan 23 01:02:21.816960 containerd[1567]: time="2026-01-23T01:02:21.816910767Z" level=info msg="StartContainer for \"716b1c984fc91250425933d68dc456812436a34eccde144fe2e1546cb32977e1\" returns successfully" Jan 23 01:02:21.892643 containerd[1567]: time="2026-01-23T01:02:21.891365669Z" level=info msg="StartContainer for \"9d5db59dc60e9a5edd390e30ba247371a434c0f0d0b23ae0b2c40b694c6c0398\" returns successfully" Jan 23 01:02:21.963037 containerd[1567]: time="2026-01-23T01:02:21.962663001Z" level=info msg="StartContainer for \"a8352e5e50011dcb90427ef0827e66225c9cd3d0c7b719166121dae8605de346\" returns successfully" Jan 23 01:02:22.110267 kubelet[2530]: E0123 01:02:22.110228 2530 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:02:22.112633 kubelet[2530]: E0123 01:02:22.112540 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:22.120928 kubelet[2530]: E0123 01:02:22.120588 2530 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:02:22.120928 kubelet[2530]: E0123 01:02:22.120726 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:22.137023 kubelet[2530]: E0123 01:02:22.135702 2530 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:02:22.137023 kubelet[2530]: E0123 01:02:22.136157 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:24.346385 
kubelet[2530]: E0123 01:02:24.346063 2530 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:02:24.360645 kubelet[2530]: E0123 01:02:24.359937 2530 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:02:24.360990 kubelet[2530]: E0123 01:02:24.360681 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:24.366747 kubelet[2530]: E0123 01:02:24.365294 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:25.006114 kubelet[2530]: I0123 01:02:25.004461 2530 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:02:25.364276 kubelet[2530]: E0123 01:02:25.362928 2530 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:02:25.364276 kubelet[2530]: E0123 01:02:25.363375 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:27.511218 kubelet[2530]: E0123 01:02:27.510293 2530 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 01:02:27.831450 kubelet[2530]: E0123 01:02:27.830487 2530 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:02:27.836165 kubelet[2530]: E0123 01:02:27.836137 2530 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:31.903209 kubelet[2530]: E0123 01:02:31.902649 2530 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:02:32.216565 kubelet[2530]: E0123 01:02:32.148404 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:33.091698 kubelet[2530]: E0123 01:02:33.044353 2530 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 01:02:33.273331 kubelet[2530]: E0123 01:02:33.271160 2530 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:02:33.273331 kubelet[2530]: E0123 01:02:33.272524 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:33.658609 kubelet[2530]: E0123 01:02:33.652737 2530 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Jan 23 01:02:34.553692 kubelet[2530]: E0123 01:02:34.550941 2530 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.18:6443/api/v1/namespaces/default/events\": net/http: TLS 
handshake timeout" event="&Event{ObjectMeta:{localhost.188d367c7ce2fd00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 01:02:16.925568256 +0000 UTC m=+2.322934380,LastTimestamp:2026-01-23 01:02:16.925568256 +0000 UTC m=+2.322934380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 01:02:35.011070 kubelet[2530]: E0123 01:02:35.010199 2530 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 23 01:02:35.036664 kubelet[2530]: E0123 01:02:35.035009 2530 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 01:02:35.143470 kubelet[2530]: E0123 01:02:35.140266 2530 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 01:02:35.143470 kubelet[2530]: E0123 01:02:35.140551 2530 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 
01:02:36.657916 kubelet[2530]: E0123 01:02:36.656740 2530 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:02:37.517105 kubelet[2530]: E0123 01:02:37.515498 2530 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 01:02:41.559087 kubelet[2530]: I0123 01:02:41.557758 2530 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:02:45.719069 kubelet[2530]: E0123 01:02:45.717074 2530 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 23 01:02:45.797333 kubelet[2530]: I0123 01:02:45.797291 2530 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 01:02:45.799187 kubelet[2530]: E0123 01:02:45.799161 2530 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 23 01:02:45.800941 kubelet[2530]: E0123 01:02:45.797550 2530 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188d367c7ce2fd00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 01:02:16.925568256 +0000 UTC m=+2.322934380,LastTimestamp:2026-01-23 01:02:16.925568256 +0000 UTC m=+2.322934380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 01:02:45.871031 kubelet[2530]: I0123 01:02:45.868348 2530 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 01:02:45.900587 kubelet[2530]: I0123 01:02:45.900461 2530 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 01:02:45.951936 kubelet[2530]: E0123 01:02:45.950690 2530 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 23 01:02:45.951936 kubelet[2530]: E0123 01:02:45.951364 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:45.951936 kubelet[2530]: E0123 01:02:45.951568 2530 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 23 01:02:45.951936 kubelet[2530]: I0123 01:02:45.951588 2530 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 01:02:45.960093 kubelet[2530]: E0123 01:02:45.959537 2530 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 23 01:02:45.960093 kubelet[2530]: I0123 01:02:45.959656 2530 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 01:02:45.968601 kubelet[2530]: E0123 01:02:45.968560 2530 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 23 01:02:46.233093 kubelet[2530]: I0123 01:02:46.232142 2530 apiserver.go:52] "Watching apiserver" Jan 23 01:02:46.301118 kubelet[2530]: I0123 01:02:46.299601 2530 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:02:54.713138 systemd[1]: Reload requested from client PID 2817 ('systemctl') (unit session-7.scope)... Jan 23 01:02:54.713172 systemd[1]: Reloading... Jan 23 01:02:55.061139 zram_generator::config[2866]: No configuration found. Jan 23 01:02:55.704396 systemd[1]: Reloading finished in 990 ms. Jan 23 01:02:55.802690 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:02:55.891227 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 01:02:55.892401 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:02:55.894368 systemd[1]: kubelet.service: Consumed 11.399s CPU time, 138.6M memory peak. Jan 23 01:02:55.918336 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:02:56.578474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:02:56.617328 (kubelet)[2904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:02:56.975656 kubelet[2904]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:02:56.975656 kubelet[2904]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 23 01:02:56.975656 kubelet[2904]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:02:56.975656 kubelet[2904]: I0123 01:02:56.974121 2904 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:02:57.025935 kubelet[2904]: I0123 01:02:57.025122 2904 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 01:02:57.025935 kubelet[2904]: I0123 01:02:57.025254 2904 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:02:57.025935 kubelet[2904]: I0123 01:02:57.025641 2904 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 01:02:57.043408 kubelet[2904]: I0123 01:02:57.043368 2904 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 01:02:57.092926 kubelet[2904]: I0123 01:02:57.091350 2904 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:02:57.134751 kubelet[2904]: I0123 01:02:57.134488 2904 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:02:57.216000 kubelet[2904]: I0123 01:02:57.215546 2904 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 01:02:57.217023 kubelet[2904]: I0123 01:02:57.216675 2904 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:02:57.217392 kubelet[2904]: I0123 01:02:57.216719 2904 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:02:57.222902 kubelet[2904]: I0123 01:02:57.220136 2904 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 01:02:57.222902 
kubelet[2904]: I0123 01:02:57.220163 2904 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 01:02:57.222902 kubelet[2904]: I0123 01:02:57.220339 2904 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:02:57.222902 kubelet[2904]: I0123 01:02:57.221083 2904 kubelet.go:480] "Attempting to sync node with API server" Jan 23 01:02:57.222902 kubelet[2904]: I0123 01:02:57.221103 2904 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:02:57.222902 kubelet[2904]: I0123 01:02:57.221141 2904 kubelet.go:386] "Adding apiserver pod source" Jan 23 01:02:57.222902 kubelet[2904]: I0123 01:02:57.221302 2904 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:02:57.247035 kubelet[2904]: I0123 01:02:57.242344 2904 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:02:57.247035 kubelet[2904]: I0123 01:02:57.245152 2904 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 01:02:57.271473 kubelet[2904]: I0123 01:02:57.271444 2904 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:02:57.271622 kubelet[2904]: I0123 01:02:57.271609 2904 server.go:1289] "Started kubelet" Jan 23 01:02:57.285576 kubelet[2904]: I0123 01:02:57.272289 2904 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:02:57.291520 kubelet[2904]: I0123 01:02:57.291321 2904 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:02:57.294405 kubelet[2904]: I0123 01:02:57.293700 2904 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:02:57.304455 kubelet[2904]: I0123 01:02:57.302756 2904 server.go:317] "Adding debug handlers to kubelet server" Jan 23 01:02:57.306063 
kubelet[2904]: I0123 01:02:57.305677 2904 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:02:57.309945 kubelet[2904]: I0123 01:02:57.307423 2904 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:02:57.311148 kubelet[2904]: I0123 01:02:57.310623 2904 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:02:57.317114 kubelet[2904]: I0123 01:02:57.316040 2904 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:02:57.325584 kubelet[2904]: E0123 01:02:57.317669 2904 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:02:57.325584 kubelet[2904]: I0123 01:02:57.324613 2904 factory.go:223] Registration of the systemd container factory successfully Jan 23 01:02:57.325584 kubelet[2904]: I0123 01:02:57.324749 2904 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:02:57.333396 kubelet[2904]: I0123 01:02:57.333343 2904 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:02:57.347869 kubelet[2904]: I0123 01:02:57.347111 2904 factory.go:223] Registration of the containerd container factory successfully Jan 23 01:02:57.534407 kubelet[2904]: I0123 01:02:57.533116 2904 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 01:02:57.560682 kubelet[2904]: I0123 01:02:57.560405 2904 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 23 01:02:57.560682 kubelet[2904]: I0123 01:02:57.560489 2904 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 01:02:57.560682 kubelet[2904]: I0123 01:02:57.560516 2904 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 01:02:57.560682 kubelet[2904]: I0123 01:02:57.560527 2904 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 01:02:57.560682 kubelet[2904]: E0123 01:02:57.560642 2904 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:02:57.661515 kubelet[2904]: E0123 01:02:57.660934 2904 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 01:02:57.694315 kubelet[2904]: I0123 01:02:57.691928 2904 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:02:57.694315 kubelet[2904]: I0123 01:02:57.691953 2904 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:02:57.694315 kubelet[2904]: I0123 01:02:57.691979 2904 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:02:57.694950 kubelet[2904]: I0123 01:02:57.694559 2904 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 01:02:57.694950 kubelet[2904]: I0123 01:02:57.694577 2904 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 01:02:57.694950 kubelet[2904]: I0123 01:02:57.694604 2904 policy_none.go:49] "None policy: Start" Jan 23 01:02:57.694950 kubelet[2904]: I0123 01:02:57.694618 2904 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:02:57.694950 kubelet[2904]: I0123 01:02:57.694633 2904 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:02:57.695130 kubelet[2904]: I0123 01:02:57.695021 2904 state_mem.go:75] "Updated machine memory state" Jan 23 01:02:57.718638 kubelet[2904]: E0123 01:02:57.718418 
2904 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 01:02:57.724032 kubelet[2904]: I0123 01:02:57.721481 2904 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:02:57.724032 kubelet[2904]: I0123 01:02:57.721592 2904 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:02:57.724032 kubelet[2904]: I0123 01:02:57.722548 2904 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:02:57.739013 kubelet[2904]: E0123 01:02:57.736507 2904 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:02:57.891453 kubelet[2904]: I0123 01:02:57.890441 2904 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 01:02:57.900596 kubelet[2904]: I0123 01:02:57.892529 2904 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 01:02:57.900596 kubelet[2904]: I0123 01:02:57.890454 2904 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 01:02:57.913069 kubelet[2904]: I0123 01:02:57.911190 2904 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:02:57.946071 kubelet[2904]: I0123 01:02:57.942666 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:02:57.946071 kubelet[2904]: I0123 01:02:57.942725 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:02:57.949638 kubelet[2904]: I0123 01:02:57.948030 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/053c6872a110a33ba6f2df8891206ee5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"053c6872a110a33ba6f2df8891206ee5\") " pod="kube-system/kube-apiserver-localhost" Jan 23 01:02:57.949638 kubelet[2904]: I0123 01:02:57.948074 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/053c6872a110a33ba6f2df8891206ee5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"053c6872a110a33ba6f2df8891206ee5\") " pod="kube-system/kube-apiserver-localhost" Jan 23 01:02:57.949638 kubelet[2904]: I0123 01:02:57.948099 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/053c6872a110a33ba6f2df8891206ee5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"053c6872a110a33ba6f2df8891206ee5\") " pod="kube-system/kube-apiserver-localhost" Jan 23 01:02:57.949638 kubelet[2904]: I0123 01:02:57.948326 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:02:57.949638 kubelet[2904]: I0123 01:02:57.948360 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:02:57.952055 kubelet[2904]: I0123 01:02:57.948386 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:02:57.952055 kubelet[2904]: I0123 01:02:57.948410 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 23 01:02:57.986169 kubelet[2904]: I0123 01:02:57.985963 2904 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 23 01:02:57.986993 kubelet[2904]: I0123 01:02:57.986344 2904 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 01:02:58.242586 kubelet[2904]: I0123 01:02:58.242051 2904 apiserver.go:52] "Watching apiserver" Jan 23 01:02:58.250652 kubelet[2904]: E0123 01:02:58.248205 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:58.280235 kubelet[2904]: E0123 01:02:58.278979 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:58.280235 kubelet[2904]: E0123 01:02:58.279193 2904 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:58.425172 kubelet[2904]: I0123 01:02:58.425132 2904 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:02:58.837719 kubelet[2904]: I0123 01:02:58.837485 2904 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 01:02:58.857416 kubelet[2904]: I0123 01:02:58.855263 2904 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 01:02:58.928548 kubelet[2904]: E0123 01:02:58.875106 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:59.427548 kubelet[2904]: E0123 01:02:59.427308 2904 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 23 01:02:59.436214 kubelet[2904]: E0123 01:02:59.435650 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:03:01.010130 kubelet[2904]: E0123 01:03:00.989217 2904 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.252s" Jan 23 01:03:01.012445 kubelet[2904]: E0123 01:03:01.010393 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:03:01.037258 kubelet[2904]: E0123 01:03:01.036253 2904 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 23 01:03:01.041682 kubelet[2904]: E0123 01:03:01.039054 
2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:03:01.569399 kubelet[2904]: I0123 01:03:01.568677 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.568655625 podStartE2EDuration="4.568655625s" podCreationTimestamp="2026-01-23 01:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:03:01.515432579 +0000 UTC m=+4.860929229" watchObservedRunningTime="2026-01-23 01:03:01.568655625 +0000 UTC m=+4.914152275" Jan 23 01:03:01.570287 kubelet[2904]: I0123 01:03:01.570190 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.569936728 podStartE2EDuration="4.569936728s" podCreationTimestamp="2026-01-23 01:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:02:58.962957468 +0000 UTC m=+2.308454128" watchObservedRunningTime="2026-01-23 01:03:01.569936728 +0000 UTC m=+4.915433377" Jan 23 01:03:01.877095 kubelet[2904]: I0123 01:03:01.870121 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.870094631 podStartE2EDuration="4.870094631s" podCreationTimestamp="2026-01-23 01:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:03:01.676280617 +0000 UTC m=+5.021777286" watchObservedRunningTime="2026-01-23 01:03:01.870094631 +0000 UTC m=+5.215591281" Jan 23 01:03:01.960358 kubelet[2904]: I0123 01:03:01.959739 2904 kuberuntime_manager.go:1746] "Updating runtime config through cri with 
podcidr" CIDR="192.168.0.0/24" Jan 23 01:03:01.965476 containerd[1567]: time="2026-01-23T01:03:01.965030543Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 01:03:01.975756 kubelet[2904]: I0123 01:03:01.975441 2904 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 01:03:02.023470 kubelet[2904]: E0123 01:03:02.023174 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:03:02.509191 kubelet[2904]: E0123 01:03:02.504938 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:03:03.057131 kubelet[2904]: E0123 01:03:03.056289 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:03:03.059280 kubelet[2904]: E0123 01:03:03.055669 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:03:03.206988 systemd[1]: Created slice kubepods-besteffort-pod076335bb_6fd2_472f_80cf_e27f7795062a.slice - libcontainer container kubepods-besteffort-pod076335bb_6fd2_472f_80cf_e27f7795062a.slice. 
Jan 23 01:03:03.320562 kubelet[2904]: I0123 01:03:03.319213 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/076335bb-6fd2-472f-80cf-e27f7795062a-xtables-lock\") pod \"kube-proxy-mxpbj\" (UID: \"076335bb-6fd2-472f-80cf-e27f7795062a\") " pod="kube-system/kube-proxy-mxpbj" Jan 23 01:03:03.336380 kubelet[2904]: I0123 01:03:03.332268 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg56w\" (UniqueName: \"kubernetes.io/projected/076335bb-6fd2-472f-80cf-e27f7795062a-kube-api-access-kg56w\") pod \"kube-proxy-mxpbj\" (UID: \"076335bb-6fd2-472f-80cf-e27f7795062a\") " pod="kube-system/kube-proxy-mxpbj" Jan 23 01:03:03.337409 kubelet[2904]: I0123 01:03:03.337380 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/076335bb-6fd2-472f-80cf-e27f7795062a-kube-proxy\") pod \"kube-proxy-mxpbj\" (UID: \"076335bb-6fd2-472f-80cf-e27f7795062a\") " pod="kube-system/kube-proxy-mxpbj" Jan 23 01:03:03.337994 kubelet[2904]: I0123 01:03:03.337966 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/076335bb-6fd2-472f-80cf-e27f7795062a-lib-modules\") pod \"kube-proxy-mxpbj\" (UID: \"076335bb-6fd2-472f-80cf-e27f7795062a\") " pod="kube-system/kube-proxy-mxpbj" Jan 23 01:03:03.840658 kubelet[2904]: E0123 01:03:03.838627 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:03:03.862935 containerd[1567]: time="2026-01-23T01:03:03.862063983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mxpbj,Uid:076335bb-6fd2-472f-80cf-e27f7795062a,Namespace:kube-system,Attempt:0,}" Jan 
23 01:03:04.069971 kubelet[2904]: E0123 01:03:04.065141 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:03:04.179334 containerd[1567]: time="2026-01-23T01:03:04.178634570Z" level=info msg="connecting to shim 9723266ea16a75393f63a8fd2d8df4ce0fe93801580a68762865a6da4dce2d1c" address="unix:///run/containerd/s/1059463fe501d9a0462032dbd9ebe183b7c91a5fc80b0584bda6a86226d0aa29" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:03:04.456417 systemd[1]: Started cri-containerd-9723266ea16a75393f63a8fd2d8df4ce0fe93801580a68762865a6da4dce2d1c.scope - libcontainer container 9723266ea16a75393f63a8fd2d8df4ce0fe93801580a68762865a6da4dce2d1c. Jan 23 01:03:04.760333 containerd[1567]: time="2026-01-23T01:03:04.760176765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mxpbj,Uid:076335bb-6fd2-472f-80cf-e27f7795062a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9723266ea16a75393f63a8fd2d8df4ce0fe93801580a68762865a6da4dce2d1c\"" Jan 23 01:03:04.773276 kubelet[2904]: E0123 01:03:04.772493 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:03:04.817175 containerd[1567]: time="2026-01-23T01:03:04.814446644Z" level=info msg="CreateContainer within sandbox \"9723266ea16a75393f63a8fd2d8df4ce0fe93801580a68762865a6da4dce2d1c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 01:03:04.904665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount553603358.mount: Deactivated successfully. 
Jan 23 01:03:04.945281 containerd[1567]: time="2026-01-23T01:03:04.943672268Z" level=info msg="Container b466006d9c10a4c1bbc97d2122fde76e93a8bd11b923d30351690a4cb713f98a: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:03:04.950333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount806735503.mount: Deactivated successfully. Jan 23 01:03:05.157626 containerd[1567]: time="2026-01-23T01:03:05.153554890Z" level=info msg="CreateContainer within sandbox \"9723266ea16a75393f63a8fd2d8df4ce0fe93801580a68762865a6da4dce2d1c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b466006d9c10a4c1bbc97d2122fde76e93a8bd11b923d30351690a4cb713f98a\"" Jan 23 01:03:05.167436 containerd[1567]: time="2026-01-23T01:03:05.167284263Z" level=info msg="StartContainer for \"b466006d9c10a4c1bbc97d2122fde76e93a8bd11b923d30351690a4cb713f98a\"" Jan 23 01:03:05.206674 containerd[1567]: time="2026-01-23T01:03:05.205607320Z" level=info msg="connecting to shim b466006d9c10a4c1bbc97d2122fde76e93a8bd11b923d30351690a4cb713f98a" address="unix:///run/containerd/s/1059463fe501d9a0462032dbd9ebe183b7c91a5fc80b0584bda6a86226d0aa29" protocol=ttrpc version=3 Jan 23 01:03:05.267074 systemd[1]: Created slice kubepods-besteffort-pod945b234a_0158_42c0_a794_2dc4fe943a81.slice - libcontainer container kubepods-besteffort-pod945b234a_0158_42c0_a794_2dc4fe943a81.slice. 
Jan 23 01:03:05.287982 kubelet[2904]: I0123 01:03:05.284421 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv8km\" (UniqueName: \"kubernetes.io/projected/945b234a-0158-42c0-a794-2dc4fe943a81-kube-api-access-bv8km\") pod \"tigera-operator-7dcd859c48-cldlb\" (UID: \"945b234a-0158-42c0-a794-2dc4fe943a81\") " pod="tigera-operator/tigera-operator-7dcd859c48-cldlb" Jan 23 01:03:05.287982 kubelet[2904]: I0123 01:03:05.284476 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/945b234a-0158-42c0-a794-2dc4fe943a81-var-lib-calico\") pod \"tigera-operator-7dcd859c48-cldlb\" (UID: \"945b234a-0158-42c0-a794-2dc4fe943a81\") " pod="tigera-operator/tigera-operator-7dcd859c48-cldlb" Jan 23 01:03:05.409252 systemd[1]: Started cri-containerd-b466006d9c10a4c1bbc97d2122fde76e93a8bd11b923d30351690a4cb713f98a.scope - libcontainer container b466006d9c10a4c1bbc97d2122fde76e93a8bd11b923d30351690a4cb713f98a. 
Jan 23 01:03:05.605699 containerd[1567]: time="2026-01-23T01:03:05.603582696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cldlb,Uid:945b234a-0158-42c0-a794-2dc4fe943a81,Namespace:tigera-operator,Attempt:0,}" Jan 23 01:03:05.778197 containerd[1567]: time="2026-01-23T01:03:05.766375044Z" level=info msg="connecting to shim 0638e95e8d7dbbccc52f5f8a124bb807acfad8890e8a77fd67784d1b8a593b80" address="unix:///run/containerd/s/5086656fca259173acee8edac412102f364624599247dcf6e2f82afe9ed03745" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:03:05.889711 containerd[1567]: time="2026-01-23T01:03:05.889487702Z" level=info msg="StartContainer for \"b466006d9c10a4c1bbc97d2122fde76e93a8bd11b923d30351690a4cb713f98a\" returns successfully" Jan 23 01:03:06.029447 systemd[1]: Started cri-containerd-0638e95e8d7dbbccc52f5f8a124bb807acfad8890e8a77fd67784d1b8a593b80.scope - libcontainer container 0638e95e8d7dbbccc52f5f8a124bb807acfad8890e8a77fd67784d1b8a593b80. Jan 23 01:03:06.297600 kubelet[2904]: E0123 01:03:06.297372 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:03:06.478160 kubelet[2904]: E0123 01:03:06.472693 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:03:06.494430 containerd[1567]: time="2026-01-23T01:03:06.491648474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cldlb,Uid:945b234a-0158-42c0-a794-2dc4fe943a81,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0638e95e8d7dbbccc52f5f8a124bb807acfad8890e8a77fd67784d1b8a593b80\"" Jan 23 01:03:06.624490 containerd[1567]: time="2026-01-23T01:03:06.621250879Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 01:03:06.850676 kubelet[2904]: I0123 
01:03:06.849554 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mxpbj" podStartSLOduration=4.849469668 podStartE2EDuration="4.849469668s" podCreationTimestamp="2026-01-23 01:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:03:06.374688401 +0000 UTC m=+9.720185081" watchObservedRunningTime="2026-01-23 01:03:06.849469668 +0000 UTC m=+10.194966318" Jan 23 01:03:07.366541 kubelet[2904]: E0123 01:03:07.365410 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:03:07.366541 kubelet[2904]: E0123 01:03:07.365664 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:03:08.768407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4078137573.mount: Deactivated successfully. 
Jan 23 01:03:20.869739 containerd[1567]: time="2026-01-23T01:03:20.868061492Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:03:20.873352 containerd[1567]: time="2026-01-23T01:03:20.873036807Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Jan 23 01:03:20.875702 containerd[1567]: time="2026-01-23T01:03:20.875407199Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:03:20.889706 containerd[1567]: time="2026-01-23T01:03:20.889453856Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:03:20.895913 containerd[1567]: time="2026-01-23T01:03:20.895364187Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 14.27396605s"
Jan 23 01:03:20.895913 containerd[1567]: time="2026-01-23T01:03:20.895493819Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Jan 23 01:03:20.923613 containerd[1567]: time="2026-01-23T01:03:20.923392794Z" level=info msg="CreateContainer within sandbox \"0638e95e8d7dbbccc52f5f8a124bb807acfad8890e8a77fd67784d1b8a593b80\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 23 01:03:21.028170 containerd[1567]: time="2026-01-23T01:03:21.028116102Z" level=info msg="Container 230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:03:21.073402 containerd[1567]: time="2026-01-23T01:03:21.072363203Z" level=info msg="CreateContainer within sandbox \"0638e95e8d7dbbccc52f5f8a124bb807acfad8890e8a77fd67784d1b8a593b80\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56\""
Jan 23 01:03:21.074508 containerd[1567]: time="2026-01-23T01:03:21.074384384Z" level=info msg="StartContainer for \"230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56\""
Jan 23 01:03:21.081713 containerd[1567]: time="2026-01-23T01:03:21.080194258Z" level=info msg="connecting to shim 230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56" address="unix:///run/containerd/s/5086656fca259173acee8edac412102f364624599247dcf6e2f82afe9ed03745" protocol=ttrpc version=3
Jan 23 01:03:21.398117 systemd[1]: Started cri-containerd-230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56.scope - libcontainer container 230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56.
Jan 23 01:03:22.512255 containerd[1567]: time="2026-01-23T01:03:22.512098071Z" level=info msg="StartContainer for \"230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56\" returns successfully"
Jan 23 01:03:22.960105 kubelet[2904]: I0123 01:03:22.956513 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-cldlb" podStartSLOduration=3.6172509379999997 podStartE2EDuration="17.956469136s" podCreationTimestamp="2026-01-23 01:03:05 +0000 UTC" firstStartedPulling="2026-01-23 01:03:06.564540701 +0000 UTC m=+9.910037351" lastFinishedPulling="2026-01-23 01:03:20.903758898 +0000 UTC m=+24.249255549" observedRunningTime="2026-01-23 01:03:22.937520558 +0000 UTC m=+26.283017208" watchObservedRunningTime="2026-01-23 01:03:22.956469136 +0000 UTC m=+26.301965796"
Jan 23 01:03:31.965580 systemd[1]: cri-containerd-230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56.scope: Deactivated successfully.
Jan 23 01:03:31.991632 systemd[1]: cri-containerd-230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56.scope: Consumed 1.934s CPU time, 38.5M memory peak.
Jan 23 01:03:33.344096 containerd[1567]: time="2026-01-23T01:03:33.342362668Z" level=info msg="received container exit event container_id:\"230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56\" id:\"230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56\" pid:3244 exit_status:1 exited_at:{seconds:1769130213 nanos:217631449}"
Jan 23 01:03:44.544607 containerd[1567]: time="2026-01-23T01:03:44.444662430Z" level=error msg="failed to handle container TaskExit event container_id:\"230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56\" id:\"230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56\" pid:3244 exit_status:1 exited_at:{seconds:1769130213 nanos:217631449}" error="failed to stop container: context deadline exceeded"
Jan 23 01:03:46.203445 containerd[1567]: time="2026-01-23T01:03:46.193416615Z" level=info msg="TaskExit event container_id:\"230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56\" id:\"230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56\" pid:3244 exit_status:1 exited_at:{seconds:1769130213 nanos:217631449}"
Jan 23 01:03:46.536890 containerd[1567]: time="2026-01-23T01:03:46.531664588Z" level=error msg="ttrpc: received message on inactive stream" stream=29
Jan 23 01:03:46.617188 kubelet[2904]: E0123 01:03:46.616503 2904 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.565s"
Jan 23 01:03:47.052191 containerd[1567]: time="2026-01-23T01:03:47.052083062Z" level=error msg="ttrpc: received message on inactive stream" stream=25
Jan 23 01:03:47.141067 systemd[1]: cri-containerd-9d5db59dc60e9a5edd390e30ba247371a434c0f0d0b23ae0b2c40b694c6c0398.scope: Deactivated successfully.
Jan 23 01:03:47.146488 systemd[1]: cri-containerd-9d5db59dc60e9a5edd390e30ba247371a434c0f0d0b23ae0b2c40b694c6c0398.scope: Consumed 6.096s CPU time, 22.5M memory peak.
Jan 23 01:03:47.266170 systemd[1]: cri-containerd-716b1c984fc91250425933d68dc456812436a34eccde144fe2e1546cb32977e1.scope: Deactivated successfully.
Jan 23 01:03:47.267047 systemd[1]: cri-containerd-716b1c984fc91250425933d68dc456812436a34eccde144fe2e1546cb32977e1.scope: Consumed 11.957s CPU time, 45.3M memory peak.
Jan 23 01:03:47.330627 containerd[1567]: time="2026-01-23T01:03:47.327525191Z" level=info msg="received container exit event container_id:\"9d5db59dc60e9a5edd390e30ba247371a434c0f0d0b23ae0b2c40b694c6c0398\" id:\"9d5db59dc60e9a5edd390e30ba247371a434c0f0d0b23ae0b2c40b694c6c0398\" pid:2749 exit_status:1 exited_at:{seconds:1769130227 nanos:315709214}"
Jan 23 01:03:47.457059 containerd[1567]: time="2026-01-23T01:03:47.456147440Z" level=info msg="received container exit event container_id:\"716b1c984fc91250425933d68dc456812436a34eccde144fe2e1546cb32977e1\" id:\"716b1c984fc91250425933d68dc456812436a34eccde144fe2e1546cb32977e1\" pid:2721 exit_status:1 exited_at:{seconds:1769130227 nanos:402403876}"
Jan 23 01:03:47.869732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56-rootfs.mount: Deactivated successfully.
Jan 23 01:03:47.976113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d5db59dc60e9a5edd390e30ba247371a434c0f0d0b23ae0b2c40b694c6c0398-rootfs.mount: Deactivated successfully.
Jan 23 01:03:48.114687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-716b1c984fc91250425933d68dc456812436a34eccde144fe2e1546cb32977e1-rootfs.mount: Deactivated successfully.
Jan 23 01:03:48.567624 update_engine[1548]: I20260123 01:03:48.564527 1548 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 23 01:03:48.567624 update_engine[1548]: I20260123 01:03:48.566720 1548 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 23 01:03:48.587558 update_engine[1548]: I20260123 01:03:48.581055 1548 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 23 01:03:48.590386 kubelet[2904]: I0123 01:03:48.590192 2904 scope.go:117] "RemoveContainer" containerID="716b1c984fc91250425933d68dc456812436a34eccde144fe2e1546cb32977e1"
Jan 23 01:03:48.593503 kubelet[2904]: I0123 01:03:48.591148 2904 scope.go:117] "RemoveContainer" containerID="9d5db59dc60e9a5edd390e30ba247371a434c0f0d0b23ae0b2c40b694c6c0398"
Jan 23 01:03:48.596461 update_engine[1548]: I20260123 01:03:48.593351 1548 omaha_request_params.cc:62] Current group set to stable
Jan 23 01:03:48.597958 kubelet[2904]: E0123 01:03:48.596704 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:03:48.604625 kubelet[2904]: E0123 01:03:48.604423 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:03:48.605578 update_engine[1548]: I20260123 01:03:48.605473 1548 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 23 01:03:48.605578 update_engine[1548]: I20260123 01:03:48.605546 1548 update_attempter.cc:643] Scheduling an action processor start.
Jan 23 01:03:48.610430 update_engine[1548]: I20260123 01:03:48.605582 1548 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 23 01:03:48.622993 update_engine[1548]: I20260123 01:03:48.620594 1548 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 23 01:03:48.623615 update_engine[1548]: I20260123 01:03:48.623579 1548 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 23 01:03:48.623700 update_engine[1548]: I20260123 01:03:48.623680 1548 omaha_request_action.cc:272] Request:
Jan 23 01:03:48.623700 update_engine[1548]:
Jan 23 01:03:48.623700 update_engine[1548]:
Jan 23 01:03:48.623700 update_engine[1548]:
Jan 23 01:03:48.623700 update_engine[1548]:
Jan 23 01:03:48.623700 update_engine[1548]:
Jan 23 01:03:48.623700 update_engine[1548]:
Jan 23 01:03:48.623700 update_engine[1548]:
Jan 23 01:03:48.623700 update_engine[1548]:
Jan 23 01:03:48.627136 update_engine[1548]: I20260123 01:03:48.625555 1548 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 23 01:03:48.641680 kubelet[2904]: I0123 01:03:48.634693 2904 scope.go:117] "RemoveContainer" containerID="230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56"
Jan 23 01:03:48.716101 update_engine[1548]: I20260123 01:03:48.716017 1548 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 23 01:03:48.764432 update_engine[1548]: I20260123 01:03:48.763365 1548 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 23 01:03:48.790400 update_engine[1548]: E20260123 01:03:48.782510 1548 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 23 01:03:48.790400 update_engine[1548]: I20260123 01:03:48.783404 1548 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 23 01:03:48.816757 containerd[1567]: time="2026-01-23T01:03:48.816582177Z" level=info msg="CreateContainer within sandbox \"5cb66d823f37deaa9c9698cb19aeb57448cff62f3b1c24cf91cfc755e8cf47e3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 23 01:03:48.825295 containerd[1567]: time="2026-01-23T01:03:48.821692116Z" level=info msg="CreateContainer within sandbox \"0638e95e8d7dbbccc52f5f8a124bb807acfad8890e8a77fd67784d1b8a593b80\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 23 01:03:48.829001 containerd[1567]: time="2026-01-23T01:03:48.827453990Z" level=info msg="CreateContainer within sandbox \"30da438abf3dc33a4fd7453526e15258c1a94f8ff7121750e84b72ec1471c1c5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 23 01:03:49.052524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1480238808.mount: Deactivated successfully.
Jan 23 01:03:49.097705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1002323211.mount: Deactivated successfully.
Jan 23 01:03:49.099350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2973798616.mount: Deactivated successfully.
Jan 23 01:03:49.108325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4151230001.mount: Deactivated successfully.
Jan 23 01:03:49.130135 containerd[1567]: time="2026-01-23T01:03:49.128554726Z" level=info msg="Container 2b27375bf613e1a1f20f8525070b4f234b1a6428470ffc4922cbb050692c69c4: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:03:49.150145 locksmithd[1613]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 23 01:03:49.159688 containerd[1567]: time="2026-01-23T01:03:49.156672430Z" level=info msg="Container 00ecf8a5373306f8b1d7009e451bfea59a1bc12435483b090bda6249c6fb4b23: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:03:49.161127 containerd[1567]: time="2026-01-23T01:03:49.159572200Z" level=info msg="Container 93f166fed0d74b1dc975186b30f1030661fb778681600a75d60e713bcb2bcc92: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:03:49.222624 containerd[1567]: time="2026-01-23T01:03:49.222570694Z" level=info msg="CreateContainer within sandbox \"0638e95e8d7dbbccc52f5f8a124bb807acfad8890e8a77fd67784d1b8a593b80\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"2b27375bf613e1a1f20f8525070b4f234b1a6428470ffc4922cbb050692c69c4\""
Jan 23 01:03:49.234050 containerd[1567]: time="2026-01-23T01:03:49.234011836Z" level=info msg="StartContainer for \"2b27375bf613e1a1f20f8525070b4f234b1a6428470ffc4922cbb050692c69c4\""
Jan 23 01:03:49.237624 containerd[1567]: time="2026-01-23T01:03:49.237319207Z" level=info msg="connecting to shim 2b27375bf613e1a1f20f8525070b4f234b1a6428470ffc4922cbb050692c69c4" address="unix:///run/containerd/s/5086656fca259173acee8edac412102f364624599247dcf6e2f82afe9ed03745" protocol=ttrpc version=3
Jan 23 01:03:49.371750 containerd[1567]: time="2026-01-23T01:03:49.369623535Z" level=info msg="CreateContainer within sandbox \"30da438abf3dc33a4fd7453526e15258c1a94f8ff7121750e84b72ec1471c1c5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"93f166fed0d74b1dc975186b30f1030661fb778681600a75d60e713bcb2bcc92\""
Jan 23 01:03:49.403668 containerd[1567]: time="2026-01-23T01:03:49.397629543Z" level=info msg="CreateContainer within sandbox \"5cb66d823f37deaa9c9698cb19aeb57448cff62f3b1c24cf91cfc755e8cf47e3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"00ecf8a5373306f8b1d7009e451bfea59a1bc12435483b090bda6249c6fb4b23\""
Jan 23 01:03:49.403668 containerd[1567]: time="2026-01-23T01:03:49.400691531Z" level=info msg="StartContainer for \"93f166fed0d74b1dc975186b30f1030661fb778681600a75d60e713bcb2bcc92\""
Jan 23 01:03:49.403668 containerd[1567]: time="2026-01-23T01:03:49.403037476Z" level=info msg="StartContainer for \"00ecf8a5373306f8b1d7009e451bfea59a1bc12435483b090bda6249c6fb4b23\""
Jan 23 01:03:49.410677 containerd[1567]: time="2026-01-23T01:03:49.410092526Z" level=info msg="connecting to shim 93f166fed0d74b1dc975186b30f1030661fb778681600a75d60e713bcb2bcc92" address="unix:///run/containerd/s/28c5fc82a494fdc38b6fdcd2f54971a9f8510f9f777f41461a106ff78a8226bf" protocol=ttrpc version=3
Jan 23 01:03:49.509016 containerd[1567]: time="2026-01-23T01:03:49.508960083Z" level=info msg="connecting to shim 00ecf8a5373306f8b1d7009e451bfea59a1bc12435483b090bda6249c6fb4b23" address="unix:///run/containerd/s/9ad0c747677d8059b86e138b51b32c656785bfd760ecc3df49213be43bbd40da" protocol=ttrpc version=3
Jan 23 01:03:49.512517 systemd[1]: Started cri-containerd-2b27375bf613e1a1f20f8525070b4f234b1a6428470ffc4922cbb050692c69c4.scope - libcontainer container 2b27375bf613e1a1f20f8525070b4f234b1a6428470ffc4922cbb050692c69c4.
Jan 23 01:03:49.634518 systemd[1]: Started cri-containerd-93f166fed0d74b1dc975186b30f1030661fb778681600a75d60e713bcb2bcc92.scope - libcontainer container 93f166fed0d74b1dc975186b30f1030661fb778681600a75d60e713bcb2bcc92.
Jan 23 01:03:49.945598 systemd[1]: Started cri-containerd-00ecf8a5373306f8b1d7009e451bfea59a1bc12435483b090bda6249c6fb4b23.scope - libcontainer container 00ecf8a5373306f8b1d7009e451bfea59a1bc12435483b090bda6249c6fb4b23.
Jan 23 01:03:50.116465 containerd[1567]: time="2026-01-23T01:03:50.116408160Z" level=info msg="StartContainer for \"2b27375bf613e1a1f20f8525070b4f234b1a6428470ffc4922cbb050692c69c4\" returns successfully"
Jan 23 01:03:50.220542 containerd[1567]: time="2026-01-23T01:03:50.218075411Z" level=info msg="StartContainer for \"93f166fed0d74b1dc975186b30f1030661fb778681600a75d60e713bcb2bcc92\" returns successfully"
Jan 23 01:03:50.508529 containerd[1567]: time="2026-01-23T01:03:50.506079105Z" level=info msg="StartContainer for \"00ecf8a5373306f8b1d7009e451bfea59a1bc12435483b090bda6249c6fb4b23\" returns successfully"
Jan 23 01:03:50.874001 kubelet[2904]: E0123 01:03:50.873507 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:03:51.015997 kubelet[2904]: E0123 01:03:51.013498 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:03:52.030090 kubelet[2904]: E0123 01:03:52.029724 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:03:52.497483 kubelet[2904]: E0123 01:03:52.496966 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:03:53.039713 kubelet[2904]: E0123 01:03:53.039429 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:03:55.329367 sudo[1788]: pam_unix(sudo:session): session closed for user root
Jan 23 01:03:55.354996 sshd[1787]: Connection closed by 10.0.0.1 port 39404
Jan 23 01:03:55.367433 sshd-session[1784]: pam_unix(sshd:session): session closed for user core
Jan 23 01:03:55.402560 systemd[1]: sshd@6-10.0.0.18:22-10.0.0.1:39404.service: Deactivated successfully.
Jan 23 01:03:55.412639 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 01:03:55.413632 systemd[1]: session-7.scope: Consumed 29.070s CPU time, 226.8M memory peak.
Jan 23 01:03:55.432353 systemd-logind[1547]: Session 7 logged out. Waiting for processes to exit.
Jan 23 01:03:55.450406 systemd-logind[1547]: Removed session 7.
Jan 23 01:03:56.396028 kubelet[2904]: E0123 01:03:56.395673 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:03:58.548621 update_engine[1548]: I20260123 01:03:58.546571 1548 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 23 01:03:58.548621 update_engine[1548]: I20260123 01:03:58.546689 1548 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 23 01:03:58.550142 update_engine[1548]: I20260123 01:03:58.550001 1548 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 23 01:03:58.569980 update_engine[1548]: E20260123 01:03:58.569614 1548 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 23 01:03:58.570344 update_engine[1548]: I20260123 01:03:58.570308 1548 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jan 23 01:04:02.533609 kubelet[2904]: E0123 01:04:02.533083 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:04:06.426195 kubelet[2904]: E0123 01:04:06.424566 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:04:08.607074 update_engine[1548]: I20260123 01:04:08.575423 1548 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 23 01:04:09.859098 update_engine[1548]: I20260123 01:04:08.667522 1548 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 23 01:04:13.668302 update_engine[1548]: I20260123 01:04:10.573448 1548 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 23 01:04:13.668302 update_engine[1548]: E20260123 01:04:12.252630 1548 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 23 01:04:13.668302 update_engine[1548]: I20260123 01:04:13.633643 1548 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 23 01:04:13.840065 kubelet[2904]: E0123 01:04:13.839967 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:04:16.564319 kubelet[2904]: E0123 01:04:16.562722 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:04:21.796968 systemd[1]: Created slice kubepods-besteffort-pod62c95a0a_a81e_4e5a_8918_dab5b64b362f.slice - libcontainer container kubepods-besteffort-pod62c95a0a_a81e_4e5a_8918_dab5b64b362f.slice.
Jan 23 01:04:21.875741 kubelet[2904]: I0123 01:04:21.875021 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62c95a0a-a81e-4e5a-8918-dab5b64b362f-tigera-ca-bundle\") pod \"calico-typha-574c85cf64-kstz2\" (UID: \"62c95a0a-a81e-4e5a-8918-dab5b64b362f\") " pod="calico-system/calico-typha-574c85cf64-kstz2"
Jan 23 01:04:21.875741 kubelet[2904]: I0123 01:04:21.875216 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bmjs\" (UniqueName: \"kubernetes.io/projected/62c95a0a-a81e-4e5a-8918-dab5b64b362f-kube-api-access-5bmjs\") pod \"calico-typha-574c85cf64-kstz2\" (UID: \"62c95a0a-a81e-4e5a-8918-dab5b64b362f\") " pod="calico-system/calico-typha-574c85cf64-kstz2"
Jan 23 01:04:21.875741 kubelet[2904]: I0123 01:04:21.875261 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/62c95a0a-a81e-4e5a-8918-dab5b64b362f-typha-certs\") pod \"calico-typha-574c85cf64-kstz2\" (UID: \"62c95a0a-a81e-4e5a-8918-dab5b64b362f\") " pod="calico-system/calico-typha-574c85cf64-kstz2"
Jan 23 01:04:22.362974 systemd[1]: Created slice kubepods-besteffort-podbe6e6546_540b_4a5f_934f_0dcc8f653eb0.slice - libcontainer container kubepods-besteffort-podbe6e6546_540b_4a5f_934f_0dcc8f653eb0.slice.
Jan 23 01:04:22.424017 kubelet[2904]: I0123 01:04:22.419415 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/be6e6546-540b-4a5f-934f-0dcc8f653eb0-node-certs\") pod \"calico-node-q9nkf\" (UID: \"be6e6546-540b-4a5f-934f-0dcc8f653eb0\") " pod="calico-system/calico-node-q9nkf"
Jan 23 01:04:22.437985 kubelet[2904]: I0123 01:04:22.431957 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be6e6546-540b-4a5f-934f-0dcc8f653eb0-tigera-ca-bundle\") pod \"calico-node-q9nkf\" (UID: \"be6e6546-540b-4a5f-934f-0dcc8f653eb0\") " pod="calico-system/calico-node-q9nkf"
Jan 23 01:04:22.437985 kubelet[2904]: I0123 01:04:22.434198 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/be6e6546-540b-4a5f-934f-0dcc8f653eb0-cni-log-dir\") pod \"calico-node-q9nkf\" (UID: \"be6e6546-540b-4a5f-934f-0dcc8f653eb0\") " pod="calico-system/calico-node-q9nkf"
Jan 23 01:04:22.437985 kubelet[2904]: I0123 01:04:22.434267 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/be6e6546-540b-4a5f-934f-0dcc8f653eb0-flexvol-driver-host\") pod \"calico-node-q9nkf\" (UID: \"be6e6546-540b-4a5f-934f-0dcc8f653eb0\") " pod="calico-system/calico-node-q9nkf"
Jan 23 01:04:22.437985 kubelet[2904]: I0123 01:04:22.434368 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/be6e6546-540b-4a5f-934f-0dcc8f653eb0-var-lib-calico\") pod \"calico-node-q9nkf\" (UID: \"be6e6546-540b-4a5f-934f-0dcc8f653eb0\") " pod="calico-system/calico-node-q9nkf"
Jan 23 01:04:22.437985 kubelet[2904]: I0123 01:04:22.434460 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/be6e6546-540b-4a5f-934f-0dcc8f653eb0-var-run-calico\") pod \"calico-node-q9nkf\" (UID: \"be6e6546-540b-4a5f-934f-0dcc8f653eb0\") " pod="calico-system/calico-node-q9nkf"
Jan 23 01:04:22.450703 kubelet[2904]: I0123 01:04:22.434490 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/be6e6546-540b-4a5f-934f-0dcc8f653eb0-cni-net-dir\") pod \"calico-node-q9nkf\" (UID: \"be6e6546-540b-4a5f-934f-0dcc8f653eb0\") " pod="calico-system/calico-node-q9nkf"
Jan 23 01:04:22.450703 kubelet[2904]: I0123 01:04:22.434513 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/be6e6546-540b-4a5f-934f-0dcc8f653eb0-policysync\") pod \"calico-node-q9nkf\" (UID: \"be6e6546-540b-4a5f-934f-0dcc8f653eb0\") " pod="calico-system/calico-node-q9nkf"
Jan 23 01:04:22.450703 kubelet[2904]: I0123 01:04:22.434542 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lslj\" (UniqueName: \"kubernetes.io/projected/be6e6546-540b-4a5f-934f-0dcc8f653eb0-kube-api-access-7lslj\") pod \"calico-node-q9nkf\" (UID: \"be6e6546-540b-4a5f-934f-0dcc8f653eb0\") " pod="calico-system/calico-node-q9nkf"
Jan 23 01:04:22.450703 kubelet[2904]: I0123 01:04:22.434575 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/be6e6546-540b-4a5f-934f-0dcc8f653eb0-cni-bin-dir\") pod \"calico-node-q9nkf\" (UID: \"be6e6546-540b-4a5f-934f-0dcc8f653eb0\") " pod="calico-system/calico-node-q9nkf"
Jan 23 01:04:22.450703 kubelet[2904]: I0123 01:04:22.434600 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be6e6546-540b-4a5f-934f-0dcc8f653eb0-lib-modules\") pod \"calico-node-q9nkf\" (UID: \"be6e6546-540b-4a5f-934f-0dcc8f653eb0\") " pod="calico-system/calico-node-q9nkf"
Jan 23 01:04:22.453684 containerd[1567]: time="2026-01-23T01:04:22.441444361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-574c85cf64-kstz2,Uid:62c95a0a-a81e-4e5a-8918-dab5b64b362f,Namespace:calico-system,Attempt:0,}"
Jan 23 01:04:22.461479 kubelet[2904]: I0123 01:04:22.434621 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be6e6546-540b-4a5f-934f-0dcc8f653eb0-xtables-lock\") pod \"calico-node-q9nkf\" (UID: \"be6e6546-540b-4a5f-934f-0dcc8f653eb0\") " pod="calico-system/calico-node-q9nkf"
Jan 23 01:04:22.461479 kubelet[2904]: E0123 01:04:22.438011 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:04:22.503742 kubelet[2904]: E0123 01:04:22.502376 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56"
Jan 23 01:04:22.536594 kubelet[2904]: I0123 01:04:22.536534 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/77cee7a3-d314-42b2-8d1b-22ce21da8d56-registration-dir\") pod \"csi-node-driver-pk4tl\" (UID: \"77cee7a3-d314-42b2-8d1b-22ce21da8d56\") " pod="calico-system/csi-node-driver-pk4tl"
Jan 23 01:04:22.541327 kubelet[2904]: I0123 01:04:22.539329 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/77cee7a3-d314-42b2-8d1b-22ce21da8d56-socket-dir\") pod \"csi-node-driver-pk4tl\" (UID: \"77cee7a3-d314-42b2-8d1b-22ce21da8d56\") " pod="calico-system/csi-node-driver-pk4tl"
Jan 23 01:04:22.561627 kubelet[2904]: I0123 01:04:22.561429 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77cee7a3-d314-42b2-8d1b-22ce21da8d56-kubelet-dir\") pod \"csi-node-driver-pk4tl\" (UID: \"77cee7a3-d314-42b2-8d1b-22ce21da8d56\") " pod="calico-system/csi-node-driver-pk4tl"
Jan 23 01:04:22.562061 kubelet[2904]: I0123 01:04:22.562026 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqkwr\" (UniqueName: \"kubernetes.io/projected/77cee7a3-d314-42b2-8d1b-22ce21da8d56-kube-api-access-kqkwr\") pod \"csi-node-driver-pk4tl\" (UID: \"77cee7a3-d314-42b2-8d1b-22ce21da8d56\") " pod="calico-system/csi-node-driver-pk4tl"
Jan 23 01:04:22.562383 kubelet[2904]: I0123 01:04:22.562352 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/77cee7a3-d314-42b2-8d1b-22ce21da8d56-varrun\") pod \"csi-node-driver-pk4tl\" (UID: \"77cee7a3-d314-42b2-8d1b-22ce21da8d56\") " pod="calico-system/csi-node-driver-pk4tl"
Jan 23 01:04:22.601933 kubelet[2904]: E0123 01:04:22.601883 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:04:22.602196 kubelet[2904]: W0123 01:04:22.602164 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:04:22.602431 kubelet[2904]: E0123 01:04:22.602403 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:04:22.604275 kubelet[2904]: E0123 01:04:22.604251 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:04:22.604414 kubelet[2904]: W0123 01:04:22.604389 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:04:22.604523 kubelet[2904]: E0123 01:04:22.604503 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:04:22.606408 kubelet[2904]: E0123 01:04:22.606385 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:04:22.606508 kubelet[2904]: W0123 01:04:22.606491 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:04:22.606596 kubelet[2904]: E0123 01:04:22.606579 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:04:22.607583 kubelet[2904]: E0123 01:04:22.607562 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:04:22.607683 kubelet[2904]: W0123 01:04:22.607664 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:04:22.607757 kubelet[2904]: E0123 01:04:22.607742 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:04:22.609178 kubelet[2904]: E0123 01:04:22.609155 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:04:22.609292 kubelet[2904]: W0123 01:04:22.609272 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:04:22.609386 kubelet[2904]: E0123 01:04:22.609367 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:04:22.614630 kubelet[2904]: E0123 01:04:22.614488 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:04:22.614752 kubelet[2904]: W0123 01:04:22.614731 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:04:22.614999 kubelet[2904]: E0123 01:04:22.614974 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:04:22.616271 kubelet[2904]: E0123 01:04:22.616253 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:04:22.616571 kubelet[2904]: W0123 01:04:22.616551 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:04:22.616998 kubelet[2904]: E0123 01:04:22.616978 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:04:22.618290 kubelet[2904]: E0123 01:04:22.618184 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:04:22.618685 kubelet[2904]: W0123 01:04:22.618656 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:04:22.618929 kubelet[2904]: E0123 01:04:22.618904 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:04:22.619919 kubelet[2904]: E0123 01:04:22.619895 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:04:22.620019 kubelet[2904]: W0123 01:04:22.620001 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:04:22.620201 kubelet[2904]: E0123 01:04:22.620179 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 23 01:04:22.621383 kubelet[2904]: E0123 01:04:22.621363 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.621484 kubelet[2904]: W0123 01:04:22.621464 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.621569 kubelet[2904]: E0123 01:04:22.621552 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:22.622380 kubelet[2904]: E0123 01:04:22.622360 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.622480 kubelet[2904]: W0123 01:04:22.622461 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.622571 kubelet[2904]: E0123 01:04:22.622553 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:22.623387 kubelet[2904]: E0123 01:04:22.623366 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.623633 kubelet[2904]: W0123 01:04:22.623613 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.623741 kubelet[2904]: E0123 01:04:22.623719 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:22.629027 kubelet[2904]: E0123 01:04:22.629001 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.629251 kubelet[2904]: W0123 01:04:22.629226 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.629347 kubelet[2904]: E0123 01:04:22.629327 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:22.633064 kubelet[2904]: E0123 01:04:22.633040 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.633274 kubelet[2904]: W0123 01:04:22.633251 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.633387 kubelet[2904]: E0123 01:04:22.633366 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:22.644355 kubelet[2904]: E0123 01:04:22.643440 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.645505 kubelet[2904]: W0123 01:04:22.644667 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.647299 kubelet[2904]: E0123 01:04:22.646748 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:22.663365 kubelet[2904]: E0123 01:04:22.662942 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.665208 kubelet[2904]: W0123 01:04:22.664150 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.665208 kubelet[2904]: E0123 01:04:22.664265 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:22.666281 kubelet[2904]: E0123 01:04:22.666256 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.666400 kubelet[2904]: W0123 01:04:22.666378 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.666514 kubelet[2904]: E0123 01:04:22.666492 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:22.668943 kubelet[2904]: E0123 01:04:22.668924 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.669032 kubelet[2904]: W0123 01:04:22.669016 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.669210 kubelet[2904]: E0123 01:04:22.669188 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:22.669608 kubelet[2904]: E0123 01:04:22.669590 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.669694 kubelet[2904]: W0123 01:04:22.669677 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.669903 kubelet[2904]: E0123 01:04:22.669877 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:22.670568 kubelet[2904]: E0123 01:04:22.670549 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.671275 kubelet[2904]: W0123 01:04:22.670628 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.672309 kubelet[2904]: E0123 01:04:22.672008 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:22.676985 kubelet[2904]: E0123 01:04:22.676180 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.676985 kubelet[2904]: W0123 01:04:22.676216 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.676985 kubelet[2904]: E0123 01:04:22.676245 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:22.677235 kubelet[2904]: E0123 01:04:22.676990 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.677235 kubelet[2904]: W0123 01:04:22.677009 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.677235 kubelet[2904]: E0123 01:04:22.677031 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:22.677528 kubelet[2904]: E0123 01:04:22.677430 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.677528 kubelet[2904]: W0123 01:04:22.677516 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.677607 kubelet[2904]: E0123 01:04:22.677537 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:22.699175 kubelet[2904]: E0123 01:04:22.697304 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.699175 kubelet[2904]: W0123 01:04:22.697330 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.699175 kubelet[2904]: E0123 01:04:22.697351 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:22.699175 kubelet[2904]: E0123 01:04:22.698394 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.699175 kubelet[2904]: W0123 01:04:22.698410 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.699175 kubelet[2904]: E0123 01:04:22.698428 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:22.701049 kubelet[2904]: E0123 01:04:22.700751 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.701049 kubelet[2904]: W0123 01:04:22.700972 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.701049 kubelet[2904]: E0123 01:04:22.700995 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:22.702455 kubelet[2904]: E0123 01:04:22.702388 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.702455 kubelet[2904]: W0123 01:04:22.702409 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.702455 kubelet[2904]: E0123 01:04:22.702425 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:22.704680 kubelet[2904]: E0123 01:04:22.704376 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.704680 kubelet[2904]: W0123 01:04:22.704455 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.704680 kubelet[2904]: E0123 01:04:22.704474 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:22.705598 kubelet[2904]: E0123 01:04:22.705349 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.705598 kubelet[2904]: W0123 01:04:22.705420 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.705598 kubelet[2904]: E0123 01:04:22.705438 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:22.709382 kubelet[2904]: E0123 01:04:22.708360 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.709382 kubelet[2904]: W0123 01:04:22.708434 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.709382 kubelet[2904]: E0123 01:04:22.708453 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:22.709382 kubelet[2904]: E0123 01:04:22.709322 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.709382 kubelet[2904]: W0123 01:04:22.709338 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.709382 kubelet[2904]: E0123 01:04:22.709353 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:22.712423 kubelet[2904]: E0123 01:04:22.712388 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.712517 kubelet[2904]: W0123 01:04:22.712498 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.712596 kubelet[2904]: E0123 01:04:22.712579 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:22.717055 kubelet[2904]: E0123 01:04:22.717029 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.717280 kubelet[2904]: W0123 01:04:22.717255 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.717379 kubelet[2904]: E0123 01:04:22.717357 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:22.722718 kubelet[2904]: E0123 01:04:22.722595 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.727201 kubelet[2904]: W0123 01:04:22.726959 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.727736 kubelet[2904]: E0123 01:04:22.727603 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:22.730612 kubelet[2904]: E0123 01:04:22.730509 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.730727 kubelet[2904]: W0123 01:04:22.730709 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.731366 kubelet[2904]: E0123 01:04:22.731001 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:22.736345 kubelet[2904]: E0123 01:04:22.736276 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.736345 kubelet[2904]: W0123 01:04:22.736300 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.736345 kubelet[2904]: E0123 01:04:22.736323 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:22.741065 kubelet[2904]: E0123 01:04:22.741046 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.741431 kubelet[2904]: W0123 01:04:22.741215 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.741431 kubelet[2904]: E0123 01:04:22.741241 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:22.743660 kubelet[2904]: E0123 01:04:22.743642 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.743752 kubelet[2904]: W0123 01:04:22.743733 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.744000 kubelet[2904]: E0123 01:04:22.743981 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:22.746616 kubelet[2904]: E0123 01:04:22.746552 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.746616 kubelet[2904]: W0123 01:04:22.746572 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.746616 kubelet[2904]: E0123 01:04:22.746591 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:22.748250 kubelet[2904]: E0123 01:04:22.748202 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.748250 kubelet[2904]: W0123 01:04:22.748218 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.748250 kubelet[2904]: E0123 01:04:22.748233 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:22.754189 kubelet[2904]: E0123 01:04:22.753999 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.754189 kubelet[2904]: W0123 01:04:22.754039 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.754189 kubelet[2904]: E0123 01:04:22.754074 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:22.757013 kubelet[2904]: E0123 01:04:22.756986 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.757861 kubelet[2904]: W0123 01:04:22.757196 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.757861 kubelet[2904]: E0123 01:04:22.757230 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:22.759482 kubelet[2904]: E0123 01:04:22.759461 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.759572 kubelet[2904]: W0123 01:04:22.759556 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.759651 kubelet[2904]: E0123 01:04:22.759633 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:22.764043 kubelet[2904]: E0123 01:04:22.763956 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:22.764043 kubelet[2904]: W0123 01:04:22.763974 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:22.764043 kubelet[2904]: E0123 01:04:22.763996 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:22.874409 containerd[1567]: time="2026-01-23T01:04:22.872300058Z" level=info msg="connecting to shim 9efe925c8cd60e8c5aa5e5856f9cb69df36f622bed2a008d0f9a5d8f95a4a773" address="unix:///run/containerd/s/85d867c8abf5b8814a141b533f51df424cc4c78f5fff25e0fc51c8ad762a3751" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:04:22.998557 kubelet[2904]: E0123 01:04:22.996686 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:04:23.005227 containerd[1567]: time="2026-01-23T01:04:23.005036715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q9nkf,Uid:be6e6546-540b-4a5f-934f-0dcc8f653eb0,Namespace:calico-system,Attempt:0,}" Jan 23 01:04:23.159160 kubelet[2904]: E0123 01:04:23.154728 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:23.159160 kubelet[2904]: W0123 01:04:23.158867 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:23.159160 kubelet[2904]: E0123 
01:04:23.158905 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:23.283741 containerd[1567]: time="2026-01-23T01:04:23.283381217Z" level=info msg="connecting to shim 391281fb638276c5992fd22ab62d15a77986716f45f62cc154de5a194ea79a23" address="unix:///run/containerd/s/4af3a0aaf1553df02ec19a158bc55ea4b6b641df643b460d93e1b4a2f8d7b125" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:04:23.339395 systemd[1]: Started cri-containerd-9efe925c8cd60e8c5aa5e5856f9cb69df36f622bed2a008d0f9a5d8f95a4a773.scope - libcontainer container 9efe925c8cd60e8c5aa5e5856f9cb69df36f622bed2a008d0f9a5d8f95a4a773. Jan 23 01:04:23.552062 update_engine[1548]: I20260123 01:04:23.542753 1548 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 01:04:23.552062 update_engine[1548]: I20260123 01:04:23.551061 1548 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 01:04:23.552062 update_engine[1548]: I20260123 01:04:23.551997 1548 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 01:04:23.573741 update_engine[1548]: E20260123 01:04:23.573685 1548 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 01:04:23.581633 update_engine[1548]: I20260123 01:04:23.576659 1548 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 01:04:23.581633 update_engine[1548]: I20260123 01:04:23.576696 1548 omaha_request_action.cc:617] Omaha request response: Jan 23 01:04:23.581633 update_engine[1548]: E20260123 01:04:23.576998 1548 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 23 01:04:23.581633 update_engine[1548]: I20260123 01:04:23.577210 1548 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Jan 23 01:04:23.581633 update_engine[1548]: I20260123 01:04:23.577226 1548 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 01:04:23.581633 update_engine[1548]: I20260123 01:04:23.577237 1548 update_attempter.cc:306] Processing Done. Jan 23 01:04:23.581633 update_engine[1548]: E20260123 01:04:23.577361 1548 update_attempter.cc:619] Update failed. Jan 23 01:04:23.581633 update_engine[1548]: I20260123 01:04:23.577474 1548 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 23 01:04:23.581633 update_engine[1548]: I20260123 01:04:23.577491 1548 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 23 01:04:23.581633 update_engine[1548]: I20260123 01:04:23.577502 1548 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 23 01:04:23.597403 update_engine[1548]: I20260123 01:04:23.593964 1548 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 01:04:23.597403 update_engine[1548]: I20260123 01:04:23.594175 1548 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 01:04:23.597403 update_engine[1548]: I20260123 01:04:23.594194 1548 omaha_request_action.cc:272] Request: Jan 23 01:04:23.597403 update_engine[1548]: Jan 23 01:04:23.597403 update_engine[1548]: Jan 23 01:04:23.597403 update_engine[1548]: Jan 23 01:04:23.597403 update_engine[1548]: Jan 23 01:04:23.597403 update_engine[1548]: Jan 23 01:04:23.597403 update_engine[1548]: Jan 23 01:04:23.597403 update_engine[1548]: I20260123 01:04:23.594206 1548 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 01:04:23.597403 update_engine[1548]: I20260123 01:04:23.594246 1548 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 01:04:23.597403 update_engine[1548]: I20260123 01:04:23.596951 1548 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 23 01:04:23.598017 locksmithd[1613]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 23 01:04:23.598447 systemd[1]: Started cri-containerd-391281fb638276c5992fd22ab62d15a77986716f45f62cc154de5a194ea79a23.scope - libcontainer container 391281fb638276c5992fd22ab62d15a77986716f45f62cc154de5a194ea79a23. Jan 23 01:04:23.628715 update_engine[1548]: E20260123 01:04:23.627985 1548 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 01:04:23.628715 update_engine[1548]: I20260123 01:04:23.628293 1548 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 01:04:23.628715 update_engine[1548]: I20260123 01:04:23.628319 1548 omaha_request_action.cc:617] Omaha request response: Jan 23 01:04:23.628715 update_engine[1548]: I20260123 01:04:23.628331 1548 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 01:04:23.628715 update_engine[1548]: I20260123 01:04:23.628340 1548 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 01:04:23.628715 update_engine[1548]: I20260123 01:04:23.628351 1548 update_attempter.cc:306] Processing Done. Jan 23 01:04:23.628715 update_engine[1548]: I20260123 01:04:23.628363 1548 update_attempter.cc:310] Error event sent. 
Jan 23 01:04:23.628715 update_engine[1548]: I20260123 01:04:23.628480 1548 update_check_scheduler.cc:74] Next update check in 47m9s Jan 23 01:04:23.647959 locksmithd[1613]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 23 01:04:23.865543 containerd[1567]: time="2026-01-23T01:04:23.861396961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-574c85cf64-kstz2,Uid:62c95a0a-a81e-4e5a-8918-dab5b64b362f,Namespace:calico-system,Attempt:0,} returns sandbox id \"9efe925c8cd60e8c5aa5e5856f9cb69df36f622bed2a008d0f9a5d8f95a4a773\"" Jan 23 01:04:23.940082 kubelet[2904]: E0123 01:04:23.937649 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:04:23.971461 containerd[1567]: time="2026-01-23T01:04:23.971407261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 01:04:24.169937 containerd[1567]: time="2026-01-23T01:04:24.167438584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q9nkf,Uid:be6e6546-540b-4a5f-934f-0dcc8f653eb0,Namespace:calico-system,Attempt:0,} returns sandbox id \"391281fb638276c5992fd22ab62d15a77986716f45f62cc154de5a194ea79a23\"" Jan 23 01:04:24.173283 kubelet[2904]: E0123 01:04:24.172573 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:04:24.565622 kubelet[2904]: E0123 01:04:24.563188 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:04:25.499968 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2947285429.mount: Deactivated successfully. Jan 23 01:04:26.565006 kubelet[2904]: E0123 01:04:26.561236 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:04:28.562088 kubelet[2904]: E0123 01:04:28.562019 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:04:28.934535 containerd[1567]: time="2026-01-23T01:04:28.933927285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:28.937582 containerd[1567]: time="2026-01-23T01:04:28.937264619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 23 01:04:28.940264 containerd[1567]: time="2026-01-23T01:04:28.940097443Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:28.948605 containerd[1567]: time="2026-01-23T01:04:28.948496093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:28.949505 containerd[1567]: time="2026-01-23T01:04:28.949402552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.976012332s" Jan 23 01:04:28.949505 containerd[1567]: time="2026-01-23T01:04:28.949444560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 23 01:04:28.964111 containerd[1567]: time="2026-01-23T01:04:28.963648511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 01:04:29.052824 containerd[1567]: time="2026-01-23T01:04:29.051969627Z" level=info msg="CreateContainer within sandbox \"9efe925c8cd60e8c5aa5e5856f9cb69df36f622bed2a008d0f9a5d8f95a4a773\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 01:04:29.073721 containerd[1567]: time="2026-01-23T01:04:29.071909325Z" level=info msg="Container d4aea0359f81054ab7ca02ea3fac18ca302c224696d7e0c31cdc0659d56d7e47: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:04:29.127644 containerd[1567]: time="2026-01-23T01:04:29.127533053Z" level=info msg="CreateContainer within sandbox \"9efe925c8cd60e8c5aa5e5856f9cb69df36f622bed2a008d0f9a5d8f95a4a773\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d4aea0359f81054ab7ca02ea3fac18ca302c224696d7e0c31cdc0659d56d7e47\"" Jan 23 01:04:29.129932 containerd[1567]: time="2026-01-23T01:04:29.129897712Z" level=info msg="StartContainer for \"d4aea0359f81054ab7ca02ea3fac18ca302c224696d7e0c31cdc0659d56d7e47\"" Jan 23 01:04:29.132571 containerd[1567]: time="2026-01-23T01:04:29.132433801Z" level=info msg="connecting to shim d4aea0359f81054ab7ca02ea3fac18ca302c224696d7e0c31cdc0659d56d7e47" address="unix:///run/containerd/s/85d867c8abf5b8814a141b533f51df424cc4c78f5fff25e0fc51c8ad762a3751" protocol=ttrpc version=3 Jan 23 
01:04:29.268680 systemd[1]: Started cri-containerd-d4aea0359f81054ab7ca02ea3fac18ca302c224696d7e0c31cdc0659d56d7e47.scope - libcontainer container d4aea0359f81054ab7ca02ea3fac18ca302c224696d7e0c31cdc0659d56d7e47. Jan 23 01:04:29.573395 containerd[1567]: time="2026-01-23T01:04:29.572520666Z" level=info msg="StartContainer for \"d4aea0359f81054ab7ca02ea3fac18ca302c224696d7e0c31cdc0659d56d7e47\" returns successfully" Jan 23 01:04:30.219411 kubelet[2904]: E0123 01:04:30.219155 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:04:30.335719 kubelet[2904]: E0123 01:04:30.335552 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.335979 kubelet[2904]: W0123 01:04:30.335854 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.335979 kubelet[2904]: E0123 01:04:30.335890 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.336445 kubelet[2904]: E0123 01:04:30.336303 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.336445 kubelet[2904]: W0123 01:04:30.336371 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.336445 kubelet[2904]: E0123 01:04:30.336392 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.336895 kubelet[2904]: E0123 01:04:30.336685 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.336961 kubelet[2904]: W0123 01:04:30.336917 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.336961 kubelet[2904]: E0123 01:04:30.336937 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.337379 kubelet[2904]: E0123 01:04:30.337336 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.337379 kubelet[2904]: W0123 01:04:30.337351 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.337379 kubelet[2904]: E0123 01:04:30.337366 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.337950 kubelet[2904]: E0123 01:04:30.337661 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.337950 kubelet[2904]: W0123 01:04:30.337742 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.337950 kubelet[2904]: E0123 01:04:30.337866 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.338158 kubelet[2904]: E0123 01:04:30.338121 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.338158 kubelet[2904]: W0123 01:04:30.338131 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.338158 kubelet[2904]: E0123 01:04:30.338142 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.338599 kubelet[2904]: E0123 01:04:30.338471 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.338599 kubelet[2904]: W0123 01:04:30.338599 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.338926 kubelet[2904]: E0123 01:04:30.338614 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.339273 kubelet[2904]: E0123 01:04:30.339098 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.339273 kubelet[2904]: W0123 01:04:30.339172 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.339273 kubelet[2904]: E0123 01:04:30.339187 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.362614 kubelet[2904]: E0123 01:04:30.362566 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.363568 kubelet[2904]: W0123 01:04:30.363535 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.364042 kubelet[2904]: E0123 01:04:30.364023 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.366128 kubelet[2904]: E0123 01:04:30.366103 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.366959 kubelet[2904]: W0123 01:04:30.366934 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.368125 kubelet[2904]: E0123 01:04:30.368101 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.372863 kubelet[2904]: E0123 01:04:30.372646 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.372863 kubelet[2904]: W0123 01:04:30.372667 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.372863 kubelet[2904]: E0123 01:04:30.372683 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.374680 kubelet[2904]: E0123 01:04:30.374431 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.374680 kubelet[2904]: W0123 01:04:30.374483 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.374680 kubelet[2904]: E0123 01:04:30.374501 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.380077 kubelet[2904]: E0123 01:04:30.377012 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.380077 kubelet[2904]: W0123 01:04:30.377071 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.380077 kubelet[2904]: E0123 01:04:30.377089 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.380077 kubelet[2904]: E0123 01:04:30.377609 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.380077 kubelet[2904]: W0123 01:04:30.377624 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.380077 kubelet[2904]: E0123 01:04:30.377637 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.380077 kubelet[2904]: E0123 01:04:30.378505 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.380077 kubelet[2904]: W0123 01:04:30.378518 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.380077 kubelet[2904]: E0123 01:04:30.378534 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.411652 kubelet[2904]: I0123 01:04:30.411577 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-574c85cf64-kstz2" podStartSLOduration=4.397939364 podStartE2EDuration="9.410577819s" podCreationTimestamp="2026-01-23 01:04:21 +0000 UTC" firstStartedPulling="2026-01-23 01:04:23.950063016 +0000 UTC m=+87.295559667" lastFinishedPulling="2026-01-23 01:04:28.962701462 +0000 UTC m=+92.308198122" observedRunningTime="2026-01-23 01:04:30.358216651 +0000 UTC m=+93.703713302" watchObservedRunningTime="2026-01-23 01:04:30.410577819 +0000 UTC m=+93.756074469" Jan 23 01:04:30.426088 kubelet[2904]: E0123 01:04:30.425997 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.426088 kubelet[2904]: W0123 01:04:30.426078 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.426364 kubelet[2904]: E0123 01:04:30.426110 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.428943 kubelet[2904]: E0123 01:04:30.428560 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.428943 kubelet[2904]: W0123 01:04:30.428588 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.428943 kubelet[2904]: E0123 01:04:30.428614 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.432696 kubelet[2904]: E0123 01:04:30.432557 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.432696 kubelet[2904]: W0123 01:04:30.432582 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.432696 kubelet[2904]: E0123 01:04:30.432605 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.440449 kubelet[2904]: E0123 01:04:30.440071 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.440449 kubelet[2904]: W0123 01:04:30.440153 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.440449 kubelet[2904]: E0123 01:04:30.440180 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.441630 kubelet[2904]: E0123 01:04:30.441567 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.441697 kubelet[2904]: W0123 01:04:30.441584 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.441749 kubelet[2904]: E0123 01:04:30.441710 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.442593 kubelet[2904]: E0123 01:04:30.442542 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.442593 kubelet[2904]: W0123 01:04:30.442558 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.442593 kubelet[2904]: E0123 01:04:30.442573 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.443734 kubelet[2904]: E0123 01:04:30.443299 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.443734 kubelet[2904]: W0123 01:04:30.443312 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.443734 kubelet[2904]: E0123 01:04:30.443327 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.444001 kubelet[2904]: E0123 01:04:30.443969 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.444001 kubelet[2904]: W0123 01:04:30.443982 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.444001 kubelet[2904]: E0123 01:04:30.443995 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.447320 kubelet[2904]: E0123 01:04:30.444709 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.447320 kubelet[2904]: W0123 01:04:30.444721 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.447320 kubelet[2904]: E0123 01:04:30.445024 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.447320 kubelet[2904]: E0123 01:04:30.446651 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.447320 kubelet[2904]: W0123 01:04:30.446664 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.447320 kubelet[2904]: E0123 01:04:30.446677 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.447320 kubelet[2904]: E0123 01:04:30.447115 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.447320 kubelet[2904]: W0123 01:04:30.447128 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.447320 kubelet[2904]: E0123 01:04:30.447140 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.448464 kubelet[2904]: E0123 01:04:30.447926 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.448464 kubelet[2904]: W0123 01:04:30.447997 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.448464 kubelet[2904]: E0123 01:04:30.448011 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.449421 kubelet[2904]: E0123 01:04:30.449355 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.449421 kubelet[2904]: W0123 01:04:30.449374 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.449421 kubelet[2904]: E0123 01:04:30.449387 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.451756 kubelet[2904]: E0123 01:04:30.451652 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.451756 kubelet[2904]: W0123 01:04:30.451669 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.453456 kubelet[2904]: E0123 01:04:30.453088 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.455441 kubelet[2904]: E0123 01:04:30.455126 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.455441 kubelet[2904]: W0123 01:04:30.455202 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.455441 kubelet[2904]: E0123 01:04:30.455282 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.464671 kubelet[2904]: E0123 01:04:30.464568 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.464671 kubelet[2904]: W0123 01:04:30.464588 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.464671 kubelet[2904]: E0123 01:04:30.464609 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.474027 kubelet[2904]: E0123 01:04:30.471950 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.474027 kubelet[2904]: W0123 01:04:30.472027 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.474027 kubelet[2904]: E0123 01:04:30.472049 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.477631 kubelet[2904]: E0123 01:04:30.477522 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.477631 kubelet[2904]: W0123 01:04:30.477595 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.477631 kubelet[2904]: E0123 01:04:30.477619 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.562670 kubelet[2904]: E0123 01:04:30.562518 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:04:30.564534 kubelet[2904]: E0123 01:04:30.564430 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:04:30.590526 kubelet[2904]: E0123 01:04:30.586497 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.590526 kubelet[2904]: W0123 01:04:30.586725 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.590526 kubelet[2904]: E0123 01:04:30.586756 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.590526 kubelet[2904]: E0123 01:04:30.589751 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.590526 kubelet[2904]: W0123 01:04:30.590216 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.590526 kubelet[2904]: E0123 01:04:30.590466 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.592568 kubelet[2904]: E0123 01:04:30.591108 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.592568 kubelet[2904]: W0123 01:04:30.591123 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.592568 kubelet[2904]: E0123 01:04:30.591140 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.592568 kubelet[2904]: E0123 01:04:30.592548 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.592568 kubelet[2904]: W0123 01:04:30.592562 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.592746 kubelet[2904]: E0123 01:04:30.592580 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.619734 kubelet[2904]: E0123 01:04:30.594590 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.619990 kubelet[2904]: W0123 01:04:30.619734 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.619990 kubelet[2904]: E0123 01:04:30.619878 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.622401 kubelet[2904]: E0123 01:04:30.622378 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.622560 kubelet[2904]: W0123 01:04:30.622541 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.623181 kubelet[2904]: E0123 01:04:30.622941 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.624636 kubelet[2904]: E0123 01:04:30.624568 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.624754 kubelet[2904]: W0123 01:04:30.624723 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.625309 kubelet[2904]: E0123 01:04:30.625145 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.627036 kubelet[2904]: E0123 01:04:30.626603 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.627036 kubelet[2904]: W0123 01:04:30.626620 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.627036 kubelet[2904]: E0123 01:04:30.626637 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.627926 kubelet[2904]: E0123 01:04:30.627906 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.628034 kubelet[2904]: W0123 01:04:30.628017 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.628116 kubelet[2904]: E0123 01:04:30.628098 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.630496 kubelet[2904]: E0123 01:04:30.630389 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.630958 kubelet[2904]: W0123 01:04:30.630600 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.631190 kubelet[2904]: E0123 01:04:30.631167 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.633678 kubelet[2904]: E0123 01:04:30.633205 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.633678 kubelet[2904]: W0123 01:04:30.633296 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.633678 kubelet[2904]: E0123 01:04:30.633316 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.634102 kubelet[2904]: E0123 01:04:30.634084 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.634450 kubelet[2904]: W0123 01:04:30.634167 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.634450 kubelet[2904]: E0123 01:04:30.634189 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.637536 kubelet[2904]: E0123 01:04:30.637207 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.637536 kubelet[2904]: W0123 01:04:30.637299 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.637536 kubelet[2904]: E0123 01:04:30.637319 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:30.640487 kubelet[2904]: E0123 01:04:30.640468 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.640597 kubelet[2904]: W0123 01:04:30.640581 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.640679 kubelet[2904]: E0123 01:04:30.640664 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:30.643901 kubelet[2904]: E0123 01:04:30.643876 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:30.644491 kubelet[2904]: W0123 01:04:30.644296 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:30.644491 kubelet[2904]: E0123 01:04:30.644325 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:31.026892 containerd[1567]: time="2026-01-23T01:04:31.022676933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:31.030891 containerd[1567]: time="2026-01-23T01:04:31.028071016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 23 01:04:31.037898 containerd[1567]: time="2026-01-23T01:04:31.036044387Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:31.045743 containerd[1567]: time="2026-01-23T01:04:31.045690922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:31.051877 containerd[1567]: time="2026-01-23T01:04:31.050339101Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.086640416s" Jan 23 01:04:31.051877 containerd[1567]: time="2026-01-23T01:04:31.050668543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 01:04:31.104048 containerd[1567]: time="2026-01-23T01:04:31.103907437Z" level=info msg="CreateContainer within sandbox \"391281fb638276c5992fd22ab62d15a77986716f45f62cc154de5a194ea79a23\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 01:04:31.165003 containerd[1567]: time="2026-01-23T01:04:31.164953115Z" level=info msg="Container a1ec4550a772dbae52848d42362405d4be0d07d4ddb3a8491f3788cee4c17f00: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:04:31.186715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount589882187.mount: Deactivated successfully. Jan 23 01:04:31.220633 containerd[1567]: time="2026-01-23T01:04:31.220330082Z" level=info msg="CreateContainer within sandbox \"391281fb638276c5992fd22ab62d15a77986716f45f62cc154de5a194ea79a23\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a1ec4550a772dbae52848d42362405d4be0d07d4ddb3a8491f3788cee4c17f00\"" Jan 23 01:04:31.223124 containerd[1567]: time="2026-01-23T01:04:31.222980592Z" level=info msg="StartContainer for \"a1ec4550a772dbae52848d42362405d4be0d07d4ddb3a8491f3788cee4c17f00\"" Jan 23 01:04:31.227612 containerd[1567]: time="2026-01-23T01:04:31.227039538Z" level=info msg="connecting to shim a1ec4550a772dbae52848d42362405d4be0d07d4ddb3a8491f3788cee4c17f00" address="unix:///run/containerd/s/4af3a0aaf1553df02ec19a158bc55ea4b6b641df643b460d93e1b4a2f8d7b125" protocol=ttrpc version=3 Jan 23 01:04:31.251872 kubelet[2904]: E0123 01:04:31.251585 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:04:31.256967 kubelet[2904]: E0123 01:04:31.256719 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.256967 kubelet[2904]: W0123 01:04:31.256745 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.256967 kubelet[2904]: E0123 01:04:31.256878 2904 plugins.go:703] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:31.260040 kubelet[2904]: E0123 01:04:31.259924 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.260040 kubelet[2904]: W0123 01:04:31.259944 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.260040 kubelet[2904]: E0123 01:04:31.259967 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:31.261975 kubelet[2904]: E0123 01:04:31.261695 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.267886 kubelet[2904]: W0123 01:04:31.267387 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.267886 kubelet[2904]: E0123 01:04:31.267427 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:31.270560 kubelet[2904]: E0123 01:04:31.270537 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.270655 kubelet[2904]: W0123 01:04:31.270638 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.272327 kubelet[2904]: E0123 01:04:31.270741 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:31.273047 kubelet[2904]: E0123 01:04:31.273029 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.273112 kubelet[2904]: W0123 01:04:31.273099 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.273164 kubelet[2904]: E0123 01:04:31.273152 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:31.273677 kubelet[2904]: E0123 01:04:31.273661 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.273747 kubelet[2904]: W0123 01:04:31.273735 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.274052 kubelet[2904]: E0123 01:04:31.273928 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:31.276504 kubelet[2904]: E0123 01:04:31.276364 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.276504 kubelet[2904]: W0123 01:04:31.276383 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.276504 kubelet[2904]: E0123 01:04:31.276405 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:31.283494 kubelet[2904]: E0123 01:04:31.283165 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.283494 kubelet[2904]: W0123 01:04:31.283183 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.283494 kubelet[2904]: E0123 01:04:31.283198 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:31.301040 kubelet[2904]: E0123 01:04:31.300159 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.301040 kubelet[2904]: W0123 01:04:31.300356 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.301040 kubelet[2904]: E0123 01:04:31.300558 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:31.302124 kubelet[2904]: E0123 01:04:31.301555 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.302124 kubelet[2904]: W0123 01:04:31.301568 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.302124 kubelet[2904]: E0123 01:04:31.301585 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:31.302124 kubelet[2904]: E0123 01:04:31.301950 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.302124 kubelet[2904]: W0123 01:04:31.301962 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.302124 kubelet[2904]: E0123 01:04:31.301975 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:31.302601 kubelet[2904]: E0123 01:04:31.302442 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.302601 kubelet[2904]: W0123 01:04:31.302454 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.302601 kubelet[2904]: E0123 01:04:31.302468 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:31.308164 kubelet[2904]: E0123 01:04:31.303235 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.308164 kubelet[2904]: W0123 01:04:31.303400 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.308164 kubelet[2904]: E0123 01:04:31.303416 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:31.308164 kubelet[2904]: E0123 01:04:31.303909 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.308164 kubelet[2904]: W0123 01:04:31.303924 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.308164 kubelet[2904]: E0123 01:04:31.303936 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:31.308164 kubelet[2904]: E0123 01:04:31.304236 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.308164 kubelet[2904]: W0123 01:04:31.304323 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.308164 kubelet[2904]: E0123 01:04:31.304336 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:31.308164 kubelet[2904]: E0123 01:04:31.305714 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.308758 kubelet[2904]: W0123 01:04:31.306161 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.308758 kubelet[2904]: E0123 01:04:31.306178 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:31.308758 kubelet[2904]: E0123 01:04:31.306978 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.308758 kubelet[2904]: W0123 01:04:31.306993 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.308758 kubelet[2904]: E0123 01:04:31.307006 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:31.308758 kubelet[2904]: E0123 01:04:31.308943 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.308758 kubelet[2904]: W0123 01:04:31.308958 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.308758 kubelet[2904]: E0123 01:04:31.308970 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:31.308758 kubelet[2904]: E0123 01:04:31.310033 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.308758 kubelet[2904]: W0123 01:04:31.310049 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.314054 kubelet[2904]: E0123 01:04:31.310065 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:31.314054 kubelet[2904]: E0123 01:04:31.311056 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.314054 kubelet[2904]: W0123 01:04:31.311068 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.314054 kubelet[2904]: E0123 01:04:31.311081 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:31.314054 kubelet[2904]: E0123 01:04:31.313677 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.314054 kubelet[2904]: W0123 01:04:31.313688 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.314054 kubelet[2904]: E0123 01:04:31.313702 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:31.318960 kubelet[2904]: E0123 01:04:31.315107 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.318960 kubelet[2904]: W0123 01:04:31.315125 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.318960 kubelet[2904]: E0123 01:04:31.315140 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:31.318960 kubelet[2904]: E0123 01:04:31.317055 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.318960 kubelet[2904]: W0123 01:04:31.317070 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.318960 kubelet[2904]: E0123 01:04:31.317084 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:31.320087 kubelet[2904]: E0123 01:04:31.320059 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.320087 kubelet[2904]: W0123 01:04:31.320071 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.320087 kubelet[2904]: E0123 01:04:31.320084 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:31.322152 kubelet[2904]: E0123 01:04:31.321899 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.322152 kubelet[2904]: W0123 01:04:31.321949 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.322152 kubelet[2904]: E0123 01:04:31.321963 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:31.326756 kubelet[2904]: E0123 01:04:31.322234 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.326756 kubelet[2904]: W0123 01:04:31.322421 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.326756 kubelet[2904]: E0123 01:04:31.322436 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:31.326756 kubelet[2904]: E0123 01:04:31.323900 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.326756 kubelet[2904]: W0123 01:04:31.323916 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.326756 kubelet[2904]: E0123 01:04:31.323930 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:31.326756 kubelet[2904]: E0123 01:04:31.327726 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.326756 kubelet[2904]: W0123 01:04:31.327739 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.326756 kubelet[2904]: E0123 01:04:31.327751 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:31.325526 systemd[1]: Started cri-containerd-a1ec4550a772dbae52848d42362405d4be0d07d4ddb3a8491f3788cee4c17f00.scope - libcontainer container a1ec4550a772dbae52848d42362405d4be0d07d4ddb3a8491f3788cee4c17f00. Jan 23 01:04:31.331952 kubelet[2904]: E0123 01:04:31.331607 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.331952 kubelet[2904]: W0123 01:04:31.331681 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.331952 kubelet[2904]: E0123 01:04:31.331700 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:31.332167 kubelet[2904]: E0123 01:04:31.332122 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.332167 kubelet[2904]: W0123 01:04:31.332138 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.332167 kubelet[2904]: E0123 01:04:31.332151 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:31.333485 kubelet[2904]: E0123 01:04:31.332576 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.333485 kubelet[2904]: W0123 01:04:31.332588 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.333485 kubelet[2904]: E0123 01:04:31.332604 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:31.340968 kubelet[2904]: E0123 01:04:31.340931 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.340968 kubelet[2904]: W0123 01:04:31.340948 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.340968 kubelet[2904]: E0123 01:04:31.340964 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:31.342903 kubelet[2904]: E0123 01:04:31.342167 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:31.342903 kubelet[2904]: W0123 01:04:31.342429 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:31.342903 kubelet[2904]: E0123 01:04:31.342452 2904 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:31.591124 containerd[1567]: time="2026-01-23T01:04:31.585564570Z" level=info msg="StartContainer for \"a1ec4550a772dbae52848d42362405d4be0d07d4ddb3a8491f3788cee4c17f00\" returns successfully" Jan 23 01:04:31.651656 systemd[1]: cri-containerd-a1ec4550a772dbae52848d42362405d4be0d07d4ddb3a8491f3788cee4c17f00.scope: Deactivated successfully. 
Jan 23 01:04:31.656651 containerd[1567]: time="2026-01-23T01:04:31.656316420Z" level=info msg="received container exit event container_id:\"a1ec4550a772dbae52848d42362405d4be0d07d4ddb3a8491f3788cee4c17f00\" id:\"a1ec4550a772dbae52848d42362405d4be0d07d4ddb3a8491f3788cee4c17f00\" pid:3787 exited_at:{seconds:1769130271 nanos:655186609}" Jan 23 01:04:31.766038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1ec4550a772dbae52848d42362405d4be0d07d4ddb3a8491f3788cee4c17f00-rootfs.mount: Deactivated successfully. Jan 23 01:04:32.269497 kubelet[2904]: E0123 01:04:32.268099 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:04:32.275001 kubelet[2904]: E0123 01:04:32.274684 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:04:32.290228 containerd[1567]: time="2026-01-23T01:04:32.288590380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 01:04:32.563078 kubelet[2904]: E0123 01:04:32.562464 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:04:34.562942 kubelet[2904]: E0123 01:04:34.561073 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:04:36.598909 kubelet[2904]: E0123 01:04:36.598659 2904 pod_workers.go:1301] 
"Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:04:38.563744 kubelet[2904]: E0123 01:04:38.561531 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:04:40.529681 containerd[1567]: time="2026-01-23T01:04:40.529424120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:40.531350 containerd[1567]: time="2026-01-23T01:04:40.531302638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 23 01:04:40.534295 containerd[1567]: time="2026-01-23T01:04:40.533604514Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:40.540957 containerd[1567]: time="2026-01-23T01:04:40.540201066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:40.541185 containerd[1567]: time="2026-01-23T01:04:40.540999079Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 8.23830773s" Jan 23 01:04:40.541185 containerd[1567]: time="2026-01-23T01:04:40.541040077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 01:04:40.554289 containerd[1567]: time="2026-01-23T01:04:40.554123216Z" level=info msg="CreateContainer within sandbox \"391281fb638276c5992fd22ab62d15a77986716f45f62cc154de5a194ea79a23\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 01:04:40.573047 kubelet[2904]: E0123 01:04:40.572373 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:04:40.604263 containerd[1567]: time="2026-01-23T01:04:40.604071195Z" level=info msg="Container 66e25a20bff7354428bf3d3ef228873cc45fe57147b08e1713e926188b513657: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:04:40.628586 containerd[1567]: time="2026-01-23T01:04:40.628405917Z" level=info msg="CreateContainer within sandbox \"391281fb638276c5992fd22ab62d15a77986716f45f62cc154de5a194ea79a23\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"66e25a20bff7354428bf3d3ef228873cc45fe57147b08e1713e926188b513657\"" Jan 23 01:04:40.630556 containerd[1567]: time="2026-01-23T01:04:40.630377279Z" level=info msg="StartContainer for \"66e25a20bff7354428bf3d3ef228873cc45fe57147b08e1713e926188b513657\"" Jan 23 01:04:40.634954 containerd[1567]: time="2026-01-23T01:04:40.634364053Z" level=info msg="connecting to shim 66e25a20bff7354428bf3d3ef228873cc45fe57147b08e1713e926188b513657" 
address="unix:///run/containerd/s/4af3a0aaf1553df02ec19a158bc55ea4b6b641df643b460d93e1b4a2f8d7b125" protocol=ttrpc version=3 Jan 23 01:04:40.714546 systemd[1]: Started cri-containerd-66e25a20bff7354428bf3d3ef228873cc45fe57147b08e1713e926188b513657.scope - libcontainer container 66e25a20bff7354428bf3d3ef228873cc45fe57147b08e1713e926188b513657. Jan 23 01:04:40.963959 containerd[1567]: time="2026-01-23T01:04:40.962221849Z" level=info msg="StartContainer for \"66e25a20bff7354428bf3d3ef228873cc45fe57147b08e1713e926188b513657\" returns successfully" Jan 23 01:04:41.426343 kubelet[2904]: E0123 01:04:41.426022 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:04:42.442234 kubelet[2904]: E0123 01:04:42.442114 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:04:42.564042 kubelet[2904]: E0123 01:04:42.561451 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:04:44.563011 kubelet[2904]: E0123 01:04:44.562714 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:04:44.725447 systemd[1]: cri-containerd-66e25a20bff7354428bf3d3ef228873cc45fe57147b08e1713e926188b513657.scope: Deactivated successfully. 
Jan 23 01:04:44.735147 containerd[1567]: time="2026-01-23T01:04:44.734096738Z" level=info msg="received container exit event container_id:\"66e25a20bff7354428bf3d3ef228873cc45fe57147b08e1713e926188b513657\" id:\"66e25a20bff7354428bf3d3ef228873cc45fe57147b08e1713e926188b513657\" pid:3856 exited_at:{seconds:1769130284 nanos:729916685}" Jan 23 01:04:44.729398 systemd[1]: cri-containerd-66e25a20bff7354428bf3d3ef228873cc45fe57147b08e1713e926188b513657.scope: Consumed 2.941s CPU time, 180.6M memory peak, 3.6M read from disk, 171.3M written to disk. Jan 23 01:04:44.746078 containerd[1567]: time="2026-01-23T01:04:44.745561440Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:04:44.831222 kubelet[2904]: I0123 01:04:44.830437 2904 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 01:04:45.053443 kubelet[2904]: I0123 01:04:45.053389 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29e4782e-15b5-4470-9ba4-cbf36b95c79b-config-volume\") pod \"coredns-674b8bbfcf-thkld\" (UID: \"29e4782e-15b5-4470-9ba4-cbf36b95c79b\") " pod="kube-system/coredns-674b8bbfcf-thkld" Jan 23 01:04:45.054755 kubelet[2904]: I0123 01:04:45.054667 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7kv4\" (UniqueName: \"kubernetes.io/projected/29e4782e-15b5-4470-9ba4-cbf36b95c79b-kube-api-access-r7kv4\") pod \"coredns-674b8bbfcf-thkld\" (UID: \"29e4782e-15b5-4470-9ba4-cbf36b95c79b\") " pod="kube-system/coredns-674b8bbfcf-thkld" Jan 23 01:04:45.065220 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-66e25a20bff7354428bf3d3ef228873cc45fe57147b08e1713e926188b513657-rootfs.mount: Deactivated successfully. Jan 23 01:04:45.128575 systemd[1]: Created slice kubepods-burstable-pod29e4782e_15b5_4470_9ba4_cbf36b95c79b.slice - libcontainer container kubepods-burstable-pod29e4782e_15b5_4470_9ba4_cbf36b95c79b.slice. Jan 23 01:04:45.144388 systemd[1]: Created slice kubepods-burstable-pod26ba25fb_0e8b_48e9_998f_30a0f733f697.slice - libcontainer container kubepods-burstable-pod26ba25fb_0e8b_48e9_998f_30a0f733f697.slice. Jan 23 01:04:45.162064 kubelet[2904]: I0123 01:04:45.155340 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26ba25fb-0e8b-48e9-998f-30a0f733f697-config-volume\") pod \"coredns-674b8bbfcf-p82rz\" (UID: \"26ba25fb-0e8b-48e9-998f-30a0f733f697\") " pod="kube-system/coredns-674b8bbfcf-p82rz" Jan 23 01:04:45.167549 kubelet[2904]: I0123 01:04:45.166440 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77lt6\" (UniqueName: \"kubernetes.io/projected/26ba25fb-0e8b-48e9-998f-30a0f733f697-kube-api-access-77lt6\") pod \"coredns-674b8bbfcf-p82rz\" (UID: \"26ba25fb-0e8b-48e9-998f-30a0f733f697\") " pod="kube-system/coredns-674b8bbfcf-p82rz" Jan 23 01:04:45.269669 kubelet[2904]: I0123 01:04:45.268247 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d54e261-de28-4a61-bcdc-0ebb829e113e-config\") pod \"goldmane-666569f655-5hvbp\" (UID: \"4d54e261-de28-4a61-bcdc-0ebb829e113e\") " pod="calico-system/goldmane-666569f655-5hvbp" Jan 23 01:04:45.269669 kubelet[2904]: I0123 01:04:45.268384 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/4d54e261-de28-4a61-bcdc-0ebb829e113e-goldmane-key-pair\") pod \"goldmane-666569f655-5hvbp\" (UID: \"4d54e261-de28-4a61-bcdc-0ebb829e113e\") " pod="calico-system/goldmane-666569f655-5hvbp" Jan 23 01:04:45.269669 kubelet[2904]: I0123 01:04:45.268673 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d54e261-de28-4a61-bcdc-0ebb829e113e-goldmane-ca-bundle\") pod \"goldmane-666569f655-5hvbp\" (UID: \"4d54e261-de28-4a61-bcdc-0ebb829e113e\") " pod="calico-system/goldmane-666569f655-5hvbp" Jan 23 01:04:45.269669 kubelet[2904]: I0123 01:04:45.269054 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkbmv\" (UniqueName: \"kubernetes.io/projected/4d54e261-de28-4a61-bcdc-0ebb829e113e-kube-api-access-tkbmv\") pod \"goldmane-666569f655-5hvbp\" (UID: \"4d54e261-de28-4a61-bcdc-0ebb829e113e\") " pod="calico-system/goldmane-666569f655-5hvbp" Jan 23 01:04:45.369514 kubelet[2904]: I0123 01:04:45.369465 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbdvw\" (UniqueName: \"kubernetes.io/projected/5ea72ad9-04e5-48e1-a1f3-bd44567b901e-kube-api-access-rbdvw\") pod \"calico-kube-controllers-5dcc89fd94-gvlr2\" (UID: \"5ea72ad9-04e5-48e1-a1f3-bd44567b901e\") " pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" Jan 23 01:04:45.372296 kubelet[2904]: I0123 01:04:45.370348 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ea72ad9-04e5-48e1-a1f3-bd44567b901e-tigera-ca-bundle\") pod \"calico-kube-controllers-5dcc89fd94-gvlr2\" (UID: \"5ea72ad9-04e5-48e1-a1f3-bd44567b901e\") " pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" Jan 23 01:04:45.382600 systemd[1]: Created slice 
kubepods-besteffort-pod4d54e261_de28_4a61_bcdc_0ebb829e113e.slice - libcontainer container kubepods-besteffort-pod4d54e261_de28_4a61_bcdc_0ebb829e113e.slice. Jan 23 01:04:45.455346 systemd[1]: Created slice kubepods-besteffort-pod5ea72ad9_04e5_48e1_a1f3_bd44567b901e.slice - libcontainer container kubepods-besteffort-pod5ea72ad9_04e5_48e1_a1f3_bd44567b901e.slice. Jan 23 01:04:45.512581 kubelet[2904]: E0123 01:04:45.512010 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:04:45.528580 containerd[1567]: time="2026-01-23T01:04:45.528449845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-thkld,Uid:29e4782e-15b5-4470-9ba4-cbf36b95c79b,Namespace:kube-system,Attempt:0,}" Jan 23 01:04:45.557907 kubelet[2904]: I0123 01:04:45.555175 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3c57c36f-e9c4-4469-830b-86d51909b784-calico-apiserver-certs\") pod \"calico-apiserver-6cd579f464-d54m6\" (UID: \"3c57c36f-e9c4-4469-830b-86d51909b784\") " pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" Jan 23 01:04:45.570036 kubelet[2904]: I0123 01:04:45.569991 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/baca7367-8f45-40cf-b782-1ff5b51a0c81-whisker-ca-bundle\") pod \"whisker-6c554c8d6b-phpdd\" (UID: \"baca7367-8f45-40cf-b782-1ff5b51a0c81\") " pod="calico-system/whisker-6c554c8d6b-phpdd" Jan 23 01:04:45.582472 kubelet[2904]: I0123 01:04:45.572089 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2cws\" (UniqueName: \"kubernetes.io/projected/3c57c36f-e9c4-4469-830b-86d51909b784-kube-api-access-l2cws\") pod 
\"calico-apiserver-6cd579f464-d54m6\" (UID: \"3c57c36f-e9c4-4469-830b-86d51909b784\") " pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" Jan 23 01:04:45.605881 kubelet[2904]: I0123 01:04:45.605244 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/baca7367-8f45-40cf-b782-1ff5b51a0c81-whisker-backend-key-pair\") pod \"whisker-6c554c8d6b-phpdd\" (UID: \"baca7367-8f45-40cf-b782-1ff5b51a0c81\") " pod="calico-system/whisker-6c554c8d6b-phpdd" Jan 23 01:04:45.605881 kubelet[2904]: I0123 01:04:45.605389 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw9hw\" (UniqueName: \"kubernetes.io/projected/baca7367-8f45-40cf-b782-1ff5b51a0c81-kube-api-access-rw9hw\") pod \"whisker-6c554c8d6b-phpdd\" (UID: \"baca7367-8f45-40cf-b782-1ff5b51a0c81\") " pod="calico-system/whisker-6c554c8d6b-phpdd" Jan 23 01:04:45.620717 systemd[1]: Created slice kubepods-besteffort-podbaca7367_8f45_40cf_b782_1ff5b51a0c81.slice - libcontainer container kubepods-besteffort-podbaca7367_8f45_40cf_b782_1ff5b51a0c81.slice. Jan 23 01:04:45.649047 systemd[1]: Created slice kubepods-besteffort-pod3c57c36f_e9c4_4469_830b_86d51909b784.slice - libcontainer container kubepods-besteffort-pod3c57c36f_e9c4_4469_830b_86d51909b784.slice. Jan 23 01:04:45.680558 systemd[1]: Created slice kubepods-besteffort-podaa11cfa4_c767_44e1_bc2c_24c685ae9875.slice - libcontainer container kubepods-besteffort-podaa11cfa4_c767_44e1_bc2c_24c685ae9875.slice. 
Jan 23 01:04:45.722032 kubelet[2904]: I0123 01:04:45.714345 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aa11cfa4-c767-44e1-bc2c-24c685ae9875-calico-apiserver-certs\") pod \"calico-apiserver-6cd579f464-47gkr\" (UID: \"aa11cfa4-c767-44e1-bc2c-24c685ae9875\") " pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" Jan 23 01:04:45.722032 kubelet[2904]: I0123 01:04:45.714456 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7xws\" (UniqueName: \"kubernetes.io/projected/aa11cfa4-c767-44e1-bc2c-24c685ae9875-kube-api-access-p7xws\") pod \"calico-apiserver-6cd579f464-47gkr\" (UID: \"aa11cfa4-c767-44e1-bc2c-24c685ae9875\") " pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" Jan 23 01:04:45.755757 kubelet[2904]: E0123 01:04:45.753110 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:04:45.765724 containerd[1567]: time="2026-01-23T01:04:45.765596035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p82rz,Uid:26ba25fb-0e8b-48e9-998f-30a0f733f697,Namespace:kube-system,Attempt:0,}" Jan 23 01:04:45.861734 kubelet[2904]: E0123 01:04:45.861339 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:04:45.866552 containerd[1567]: time="2026-01-23T01:04:45.865261218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5hvbp,Uid:4d54e261-de28-4a61-bcdc-0ebb829e113e,Namespace:calico-system,Attempt:0,}" Jan 23 01:04:45.931335 containerd[1567]: time="2026-01-23T01:04:45.881620204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 01:04:45.933873 
containerd[1567]: time="2026-01-23T01:04:45.911881665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dcc89fd94-gvlr2,Uid:5ea72ad9-04e5-48e1-a1f3-bd44567b901e,Namespace:calico-system,Attempt:0,}" Jan 23 01:04:45.958986 containerd[1567]: time="2026-01-23T01:04:45.958383406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c554c8d6b-phpdd,Uid:baca7367-8f45-40cf-b782-1ff5b51a0c81,Namespace:calico-system,Attempt:0,}" Jan 23 01:04:46.012523 containerd[1567]: time="2026-01-23T01:04:46.011586417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd579f464-d54m6,Uid:3c57c36f-e9c4-4469-830b-86d51909b784,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:04:46.025264 containerd[1567]: time="2026-01-23T01:04:46.024990254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd579f464-47gkr,Uid:aa11cfa4-c767-44e1-bc2c-24c685ae9875,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:04:46.609617 systemd[1]: Created slice kubepods-besteffort-pod77cee7a3_d314_42b2_8d1b_22ce21da8d56.slice - libcontainer container kubepods-besteffort-pod77cee7a3_d314_42b2_8d1b_22ce21da8d56.slice. 
Jan 23 01:04:46.623610 containerd[1567]: time="2026-01-23T01:04:46.622086310Z" level=error msg="Failed to destroy network for sandbox \"3d0d45c00ab648e17fc467107372d9035b5b97b4970414622d5ffb4a23d52cd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.634939 containerd[1567]: time="2026-01-23T01:04:46.634634944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dcc89fd94-gvlr2,Uid:5ea72ad9-04e5-48e1-a1f3-bd44567b901e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d0d45c00ab648e17fc467107372d9035b5b97b4970414622d5ffb4a23d52cd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.635383 systemd[1]: run-netns-cni\x2d54433647\x2d763b\x2db265\x2d0fbc\x2d35995393f47f.mount: Deactivated successfully. Jan 23 01:04:46.663333 containerd[1567]: time="2026-01-23T01:04:46.663209278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pk4tl,Uid:77cee7a3-d314-42b2-8d1b-22ce21da8d56,Namespace:calico-system,Attempt:0,}" Jan 23 01:04:46.697139 containerd[1567]: time="2026-01-23T01:04:46.697065188Z" level=error msg="Failed to destroy network for sandbox \"e84c4ee3d2584a9eab282532ba1c9757faa8c6ac1a4abf80b2647d2848a06311\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.705738 systemd[1]: run-netns-cni\x2ddaa9252e\x2d8fc8\x2d4045\x2d8bd5\x2deaa3c92f6e5d.mount: Deactivated successfully. 
Jan 23 01:04:46.707945 containerd[1567]: time="2026-01-23T01:04:46.707528401Z" level=error msg="Failed to destroy network for sandbox \"7d15d40973c1326f200b8f1aa4bcaf5afc3766c50fd5cddb5881cc4844abb21f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.713976 systemd[1]: run-netns-cni\x2d215c6d53\x2dfafb\x2d2154\x2d5934\x2d36152d0e11b8.mount: Deactivated successfully. Jan 23 01:04:46.721226 kubelet[2904]: E0123 01:04:46.721164 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d0d45c00ab648e17fc467107372d9035b5b97b4970414622d5ffb4a23d52cd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.721226 kubelet[2904]: E0123 01:04:46.721262 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d0d45c00ab648e17fc467107372d9035b5b97b4970414622d5ffb4a23d52cd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" Jan 23 01:04:46.721226 kubelet[2904]: E0123 01:04:46.721358 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d0d45c00ab648e17fc467107372d9035b5b97b4970414622d5ffb4a23d52cd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" Jan 23 
01:04:46.723363 kubelet[2904]: E0123 01:04:46.721439 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5dcc89fd94-gvlr2_calico-system(5ea72ad9-04e5-48e1-a1f3-bd44567b901e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5dcc89fd94-gvlr2_calico-system(5ea72ad9-04e5-48e1-a1f3-bd44567b901e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d0d45c00ab648e17fc467107372d9035b5b97b4970414622d5ffb4a23d52cd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e" Jan 23 01:04:46.759394 containerd[1567]: time="2026-01-23T01:04:46.758933274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p82rz,Uid:26ba25fb-0e8b-48e9-998f-30a0f733f697,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e84c4ee3d2584a9eab282532ba1c9757faa8c6ac1a4abf80b2647d2848a06311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.761134 containerd[1567]: time="2026-01-23T01:04:46.761015132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd579f464-d54m6,Uid:3c57c36f-e9c4-4469-830b-86d51909b784,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d15d40973c1326f200b8f1aa4bcaf5afc3766c50fd5cddb5881cc4844abb21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 23 01:04:46.763350 kubelet[2904]: E0123 01:04:46.763129 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e84c4ee3d2584a9eab282532ba1c9757faa8c6ac1a4abf80b2647d2848a06311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.763350 kubelet[2904]: E0123 01:04:46.763293 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e84c4ee3d2584a9eab282532ba1c9757faa8c6ac1a4abf80b2647d2848a06311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p82rz" Jan 23 01:04:46.763350 kubelet[2904]: E0123 01:04:46.763339 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e84c4ee3d2584a9eab282532ba1c9757faa8c6ac1a4abf80b2647d2848a06311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p82rz" Jan 23 01:04:46.763524 kubelet[2904]: E0123 01:04:46.763414 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p82rz_kube-system(26ba25fb-0e8b-48e9-998f-30a0f733f697)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p82rz_kube-system(26ba25fb-0e8b-48e9-998f-30a0f733f697)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e84c4ee3d2584a9eab282532ba1c9757faa8c6ac1a4abf80b2647d2848a06311\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p82rz" podUID="26ba25fb-0e8b-48e9-998f-30a0f733f697" Jan 23 01:04:46.768911 kubelet[2904]: E0123 01:04:46.763976 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d15d40973c1326f200b8f1aa4bcaf5afc3766c50fd5cddb5881cc4844abb21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.768911 kubelet[2904]: E0123 01:04:46.765082 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d15d40973c1326f200b8f1aa4bcaf5afc3766c50fd5cddb5881cc4844abb21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" Jan 23 01:04:46.768911 kubelet[2904]: E0123 01:04:46.765127 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d15d40973c1326f200b8f1aa4bcaf5afc3766c50fd5cddb5881cc4844abb21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" Jan 23 01:04:46.769158 kubelet[2904]: E0123 01:04:46.765196 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cd579f464-d54m6_calico-apiserver(3c57c36f-e9c4-4469-830b-86d51909b784)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-6cd579f464-d54m6_calico-apiserver(3c57c36f-e9c4-4469-830b-86d51909b784)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d15d40973c1326f200b8f1aa4bcaf5afc3766c50fd5cddb5881cc4844abb21f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784" Jan 23 01:04:46.836118 containerd[1567]: time="2026-01-23T01:04:46.835974386Z" level=error msg="Failed to destroy network for sandbox \"ea88d150c0b2c0d8cf3e962f52856115a5defaccc0fbaaa6c11f997f0c160c05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.861257 containerd[1567]: time="2026-01-23T01:04:46.849611866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-thkld,Uid:29e4782e-15b5-4470-9ba4-cbf36b95c79b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea88d150c0b2c0d8cf3e962f52856115a5defaccc0fbaaa6c11f997f0c160c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.861548 kubelet[2904]: E0123 01:04:46.854132 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea88d150c0b2c0d8cf3e962f52856115a5defaccc0fbaaa6c11f997f0c160c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.861548 kubelet[2904]: E0123 
01:04:46.854213 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea88d150c0b2c0d8cf3e962f52856115a5defaccc0fbaaa6c11f997f0c160c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-thkld" Jan 23 01:04:46.861548 kubelet[2904]: E0123 01:04:46.854247 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea88d150c0b2c0d8cf3e962f52856115a5defaccc0fbaaa6c11f997f0c160c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-thkld" Jan 23 01:04:46.862001 kubelet[2904]: E0123 01:04:46.854319 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-thkld_kube-system(29e4782e-15b5-4470-9ba4-cbf36b95c79b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-thkld_kube-system(29e4782e-15b5-4470-9ba4-cbf36b95c79b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea88d150c0b2c0d8cf3e962f52856115a5defaccc0fbaaa6c11f997f0c160c05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-thkld" podUID="29e4782e-15b5-4470-9ba4-cbf36b95c79b" Jan 23 01:04:46.871724 containerd[1567]: time="2026-01-23T01:04:46.864232128Z" level=error msg="Failed to destroy network for sandbox \"61d3120d55d7c0ef29218f732261174251a224291fa79752f6dcbaeedc6ffe6a\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.871724 containerd[1567]: time="2026-01-23T01:04:46.868379692Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c554c8d6b-phpdd,Uid:baca7367-8f45-40cf-b782-1ff5b51a0c81,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61d3120d55d7c0ef29218f732261174251a224291fa79752f6dcbaeedc6ffe6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.873077 kubelet[2904]: E0123 01:04:46.872173 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61d3120d55d7c0ef29218f732261174251a224291fa79752f6dcbaeedc6ffe6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.873077 kubelet[2904]: E0123 01:04:46.872310 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61d3120d55d7c0ef29218f732261174251a224291fa79752f6dcbaeedc6ffe6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c554c8d6b-phpdd" Jan 23 01:04:46.873077 kubelet[2904]: E0123 01:04:46.872344 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61d3120d55d7c0ef29218f732261174251a224291fa79752f6dcbaeedc6ffe6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c554c8d6b-phpdd" Jan 23 01:04:46.873253 kubelet[2904]: E0123 01:04:46.872465 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c554c8d6b-phpdd_calico-system(baca7367-8f45-40cf-b782-1ff5b51a0c81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c554c8d6b-phpdd_calico-system(baca7367-8f45-40cf-b782-1ff5b51a0c81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61d3120d55d7c0ef29218f732261174251a224291fa79752f6dcbaeedc6ffe6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c554c8d6b-phpdd" podUID="baca7367-8f45-40cf-b782-1ff5b51a0c81" Jan 23 01:04:46.877146 containerd[1567]: time="2026-01-23T01:04:46.877104825Z" level=error msg="Failed to destroy network for sandbox \"f72424fae3596b0427604d9705ed940072f8013f27bbc6d184a29551372eccf6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.908982 containerd[1567]: time="2026-01-23T01:04:46.905260868Z" level=error msg="Failed to destroy network for sandbox \"a2b260935625ca59cdd6ed689d5709d2d326572424d30e773888ec0e38b353b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.908982 containerd[1567]: time="2026-01-23T01:04:46.908186287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5hvbp,Uid:4d54e261-de28-4a61-bcdc-0ebb829e113e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"f72424fae3596b0427604d9705ed940072f8013f27bbc6d184a29551372eccf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.912084 kubelet[2904]: E0123 01:04:46.911609 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f72424fae3596b0427604d9705ed940072f8013f27bbc6d184a29551372eccf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.912084 kubelet[2904]: E0123 01:04:46.911977 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f72424fae3596b0427604d9705ed940072f8013f27bbc6d184a29551372eccf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5hvbp" Jan 23 01:04:46.912084 kubelet[2904]: E0123 01:04:46.912014 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f72424fae3596b0427604d9705ed940072f8013f27bbc6d184a29551372eccf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5hvbp" Jan 23 01:04:46.913356 kubelet[2904]: E0123 01:04:46.912079 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5hvbp_calico-system(4d54e261-de28-4a61-bcdc-0ebb829e113e)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5hvbp_calico-system(4d54e261-de28-4a61-bcdc-0ebb829e113e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f72424fae3596b0427604d9705ed940072f8013f27bbc6d184a29551372eccf6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e" Jan 23 01:04:46.913596 containerd[1567]: time="2026-01-23T01:04:46.913185114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd579f464-47gkr,Uid:aa11cfa4-c767-44e1-bc2c-24c685ae9875,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2b260935625ca59cdd6ed689d5709d2d326572424d30e773888ec0e38b353b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.913964 kubelet[2904]: E0123 01:04:46.913443 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2b260935625ca59cdd6ed689d5709d2d326572424d30e773888ec0e38b353b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:46.913964 kubelet[2904]: E0123 01:04:46.913495 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2b260935625ca59cdd6ed689d5709d2d326572424d30e773888ec0e38b353b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" Jan 23 01:04:46.913964 kubelet[2904]: E0123 01:04:46.913609 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2b260935625ca59cdd6ed689d5709d2d326572424d30e773888ec0e38b353b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" Jan 23 01:04:46.914419 kubelet[2904]: E0123 01:04:46.914167 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cd579f464-47gkr_calico-apiserver(aa11cfa4-c767-44e1-bc2c-24c685ae9875)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cd579f464-47gkr_calico-apiserver(aa11cfa4-c767-44e1-bc2c-24c685ae9875)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2b260935625ca59cdd6ed689d5709d2d326572424d30e773888ec0e38b353b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875" Jan 23 01:04:47.077256 systemd[1]: run-netns-cni\x2d8e2960f9\x2da17a\x2d86fe\x2d5448\x2db929e8724848.mount: Deactivated successfully. Jan 23 01:04:47.077994 systemd[1]: run-netns-cni\x2d22c55153\x2de3e9\x2d956b\x2d0cba\x2de8079166b683.mount: Deactivated successfully. Jan 23 01:04:47.078129 systemd[1]: run-netns-cni\x2de2d5668a\x2db066\x2d3070\x2de633\x2d454f6064ad5c.mount: Deactivated successfully. Jan 23 01:04:47.078231 systemd[1]: run-netns-cni\x2d278f16bd\x2d8711\x2d329f\x2dc709\x2d308efd052ded.mount: Deactivated successfully. 
Jan 23 01:04:47.118105 containerd[1567]: time="2026-01-23T01:04:47.117148022Z" level=error msg="Failed to destroy network for sandbox \"d5568ebd706cee6b0a5780f06a3a62ac195d9a4f2a9e68419dbb359593119b4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:47.136008 containerd[1567]: time="2026-01-23T01:04:47.131209150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pk4tl,Uid:77cee7a3-d314-42b2-8d1b-22ce21da8d56,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5568ebd706cee6b0a5780f06a3a62ac195d9a4f2a9e68419dbb359593119b4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:47.136277 kubelet[2904]: E0123 01:04:47.131614 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5568ebd706cee6b0a5780f06a3a62ac195d9a4f2a9e68419dbb359593119b4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:47.136277 kubelet[2904]: E0123 01:04:47.131889 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5568ebd706cee6b0a5780f06a3a62ac195d9a4f2a9e68419dbb359593119b4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pk4tl" Jan 23 01:04:47.136277 kubelet[2904]: E0123 01:04:47.131924 2904 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5568ebd706cee6b0a5780f06a3a62ac195d9a4f2a9e68419dbb359593119b4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pk4tl" Jan 23 01:04:47.133606 systemd[1]: run-netns-cni\x2dde2f1255\x2de090\x2d30c5\x2dbb0f\x2dfa0bb0c433c8.mount: Deactivated successfully. Jan 23 01:04:47.136494 kubelet[2904]: E0123 01:04:47.131985 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pk4tl_calico-system(77cee7a3-d314-42b2-8d1b-22ce21da8d56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pk4tl_calico-system(77cee7a3-d314-42b2-8d1b-22ce21da8d56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5568ebd706cee6b0a5780f06a3a62ac195d9a4f2a9e68419dbb359593119b4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:04:58.568003 kubelet[2904]: E0123 01:04:58.567734 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:04:58.570726 kubelet[2904]: E0123 01:04:58.570487 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:04:58.573600 containerd[1567]: time="2026-01-23T01:04:58.572593651Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-thkld,Uid:29e4782e-15b5-4470-9ba4-cbf36b95c79b,Namespace:kube-system,Attempt:0,}" Jan 23 01:04:58.575927 containerd[1567]: time="2026-01-23T01:04:58.573243935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c554c8d6b-phpdd,Uid:baca7367-8f45-40cf-b782-1ff5b51a0c81,Namespace:calico-system,Attempt:0,}" Jan 23 01:04:58.575927 containerd[1567]: time="2026-01-23T01:04:58.572593764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p82rz,Uid:26ba25fb-0e8b-48e9-998f-30a0f733f697,Namespace:kube-system,Attempt:0,}" Jan 23 01:04:58.601302 containerd[1567]: time="2026-01-23T01:04:58.599749652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd579f464-d54m6,Uid:3c57c36f-e9c4-4469-830b-86d51909b784,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:04:59.064004 containerd[1567]: time="2026-01-23T01:04:59.063951562Z" level=error msg="Failed to destroy network for sandbox \"7c745d429bd5a554618a1b4465258118ec388f1212496d62785df55df1bae98c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:59.071010 systemd[1]: run-netns-cni\x2d1fdfd644\x2da0d2\x2d46f9\x2d34bd\x2da3cae704f2c9.mount: Deactivated successfully. 
Jan 23 01:04:59.098737 containerd[1567]: time="2026-01-23T01:04:59.098552759Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c554c8d6b-phpdd,Uid:baca7367-8f45-40cf-b782-1ff5b51a0c81,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c745d429bd5a554618a1b4465258118ec388f1212496d62785df55df1bae98c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:59.099956 kubelet[2904]: E0123 01:04:59.099051 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c745d429bd5a554618a1b4465258118ec388f1212496d62785df55df1bae98c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:59.099956 kubelet[2904]: E0123 01:04:59.099197 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c745d429bd5a554618a1b4465258118ec388f1212496d62785df55df1bae98c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c554c8d6b-phpdd" Jan 23 01:04:59.099956 kubelet[2904]: E0123 01:04:59.099223 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c745d429bd5a554618a1b4465258118ec388f1212496d62785df55df1bae98c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-6c554c8d6b-phpdd" Jan 23 01:04:59.100483 kubelet[2904]: E0123 01:04:59.099346 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c554c8d6b-phpdd_calico-system(baca7367-8f45-40cf-b782-1ff5b51a0c81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c554c8d6b-phpdd_calico-system(baca7367-8f45-40cf-b782-1ff5b51a0c81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c745d429bd5a554618a1b4465258118ec388f1212496d62785df55df1bae98c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c554c8d6b-phpdd" podUID="baca7367-8f45-40cf-b782-1ff5b51a0c81" Jan 23 01:04:59.123197 containerd[1567]: time="2026-01-23T01:04:59.122046457Z" level=error msg="Failed to destroy network for sandbox \"dc130b72435c08647c1bb3cea55735d45aa91588622cfaef6ecc25407c010e55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:59.126616 containerd[1567]: time="2026-01-23T01:04:59.125483481Z" level=error msg="Failed to destroy network for sandbox \"e10651878ce273e3a1dd75536e697dab153df25719b5e2eb97449cf6b789434c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:59.127586 containerd[1567]: time="2026-01-23T01:04:59.127447356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p82rz,Uid:26ba25fb-0e8b-48e9-998f-30a0f733f697,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dc130b72435c08647c1bb3cea55735d45aa91588622cfaef6ecc25407c010e55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:59.129934 kubelet[2904]: E0123 01:04:59.129443 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc130b72435c08647c1bb3cea55735d45aa91588622cfaef6ecc25407c010e55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:59.129934 kubelet[2904]: E0123 01:04:59.129515 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc130b72435c08647c1bb3cea55735d45aa91588622cfaef6ecc25407c010e55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p82rz" Jan 23 01:04:59.129934 kubelet[2904]: E0123 01:04:59.129537 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc130b72435c08647c1bb3cea55735d45aa91588622cfaef6ecc25407c010e55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p82rz" Jan 23 01:04:59.130295 kubelet[2904]: E0123 01:04:59.129599 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p82rz_kube-system(26ba25fb-0e8b-48e9-998f-30a0f733f697)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-p82rz_kube-system(26ba25fb-0e8b-48e9-998f-30a0f733f697)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc130b72435c08647c1bb3cea55735d45aa91588622cfaef6ecc25407c010e55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p82rz" podUID="26ba25fb-0e8b-48e9-998f-30a0f733f697" Jan 23 01:04:59.137208 containerd[1567]: time="2026-01-23T01:04:59.135932500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-thkld,Uid:29e4782e-15b5-4470-9ba4-cbf36b95c79b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e10651878ce273e3a1dd75536e697dab153df25719b5e2eb97449cf6b789434c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:59.137448 kubelet[2904]: E0123 01:04:59.136505 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e10651878ce273e3a1dd75536e697dab153df25719b5e2eb97449cf6b789434c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:59.137448 kubelet[2904]: E0123 01:04:59.136570 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e10651878ce273e3a1dd75536e697dab153df25719b5e2eb97449cf6b789434c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-thkld" Jan 23 01:04:59.137448 kubelet[2904]: E0123 01:04:59.136603 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e10651878ce273e3a1dd75536e697dab153df25719b5e2eb97449cf6b789434c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-thkld" Jan 23 01:04:59.137592 kubelet[2904]: E0123 01:04:59.136662 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-thkld_kube-system(29e4782e-15b5-4470-9ba4-cbf36b95c79b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-thkld_kube-system(29e4782e-15b5-4470-9ba4-cbf36b95c79b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e10651878ce273e3a1dd75536e697dab153df25719b5e2eb97449cf6b789434c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-thkld" podUID="29e4782e-15b5-4470-9ba4-cbf36b95c79b" Jan 23 01:04:59.194315 containerd[1567]: time="2026-01-23T01:04:59.194168338Z" level=error msg="Failed to destroy network for sandbox \"9fbd0d92bb7bd15cc22ba5721c6f10c88152b1c7efee176ee3b33279e22cb359\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:59.200707 containerd[1567]: time="2026-01-23T01:04:59.200282806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd579f464-d54m6,Uid:3c57c36f-e9c4-4469-830b-86d51909b784,Namespace:calico-apiserver,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fbd0d92bb7bd15cc22ba5721c6f10c88152b1c7efee176ee3b33279e22cb359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:59.202019 kubelet[2904]: E0123 01:04:59.201585 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fbd0d92bb7bd15cc22ba5721c6f10c88152b1c7efee176ee3b33279e22cb359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:59.202019 kubelet[2904]: E0123 01:04:59.201720 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fbd0d92bb7bd15cc22ba5721c6f10c88152b1c7efee176ee3b33279e22cb359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" Jan 23 01:04:59.202019 kubelet[2904]: E0123 01:04:59.201745 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fbd0d92bb7bd15cc22ba5721c6f10c88152b1c7efee176ee3b33279e22cb359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" Jan 23 01:04:59.203630 kubelet[2904]: E0123 01:04:59.203428 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-6cd579f464-d54m6_calico-apiserver(3c57c36f-e9c4-4469-830b-86d51909b784)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cd579f464-d54m6_calico-apiserver(3c57c36f-e9c4-4469-830b-86d51909b784)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fbd0d92bb7bd15cc22ba5721c6f10c88152b1c7efee176ee3b33279e22cb359\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784" Jan 23 01:04:59.565414 containerd[1567]: time="2026-01-23T01:04:59.565351568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pk4tl,Uid:77cee7a3-d314-42b2-8d1b-22ce21da8d56,Namespace:calico-system,Attempt:0,}" Jan 23 01:04:59.569628 containerd[1567]: time="2026-01-23T01:04:59.566243786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd579f464-47gkr,Uid:aa11cfa4-c767-44e1-bc2c-24c685ae9875,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:04:59.569937 containerd[1567]: time="2026-01-23T01:04:59.566693202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dcc89fd94-gvlr2,Uid:5ea72ad9-04e5-48e1-a1f3-bd44567b901e,Namespace:calico-system,Attempt:0,}" Jan 23 01:04:59.611235 systemd[1]: run-netns-cni\x2d2a3861ca\x2d3ef9\x2de2d0\x2d78c8\x2d1a0b97e7bd71.mount: Deactivated successfully. Jan 23 01:04:59.612035 systemd[1]: run-netns-cni\x2d0513d773\x2dba56\x2d7aee\x2d9124\x2d81789d06dc20.mount: Deactivated successfully. Jan 23 01:04:59.612237 systemd[1]: run-netns-cni\x2dc1620c12\x2db36c\x2da53c\x2d479c\x2d0e6427d686af.mount: Deactivated successfully. 
Jan 23 01:04:59.918924 containerd[1567]: time="2026-01-23T01:04:59.917546521Z" level=error msg="Failed to destroy network for sandbox \"23e830f0c7635359194616aa4ba406b3b13a997322c94206af91f27c58987df8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:59.932631 systemd[1]: run-netns-cni\x2de17a438c\x2d0e67\x2db1de\x2db301\x2d2ff9cf0fae53.mount: Deactivated successfully. Jan 23 01:04:59.937674 containerd[1567]: time="2026-01-23T01:04:59.937552068Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dcc89fd94-gvlr2,Uid:5ea72ad9-04e5-48e1-a1f3-bd44567b901e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23e830f0c7635359194616aa4ba406b3b13a997322c94206af91f27c58987df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:59.948936 kubelet[2904]: E0123 01:04:59.948561 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23e830f0c7635359194616aa4ba406b3b13a997322c94206af91f27c58987df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:59.951295 kubelet[2904]: E0123 01:04:59.950540 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23e830f0c7635359194616aa4ba406b3b13a997322c94206af91f27c58987df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" Jan 23 01:04:59.951295 kubelet[2904]: E0123 01:04:59.950591 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23e830f0c7635359194616aa4ba406b3b13a997322c94206af91f27c58987df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" Jan 23 01:04:59.951295 kubelet[2904]: E0123 01:04:59.950668 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5dcc89fd94-gvlr2_calico-system(5ea72ad9-04e5-48e1-a1f3-bd44567b901e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5dcc89fd94-gvlr2_calico-system(5ea72ad9-04e5-48e1-a1f3-bd44567b901e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23e830f0c7635359194616aa4ba406b3b13a997322c94206af91f27c58987df8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e" Jan 23 01:04:59.970983 containerd[1567]: time="2026-01-23T01:04:59.970708331Z" level=error msg="Failed to destroy network for sandbox \"a129ef304999d872b4c7086fe9fd11ca2d70bb02922e94967a5101d210253dc9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:59.986955 containerd[1567]: time="2026-01-23T01:04:59.979066668Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-pk4tl,Uid:77cee7a3-d314-42b2-8d1b-22ce21da8d56,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a129ef304999d872b4c7086fe9fd11ca2d70bb02922e94967a5101d210253dc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:59.987337 kubelet[2904]: E0123 01:04:59.983631 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a129ef304999d872b4c7086fe9fd11ca2d70bb02922e94967a5101d210253dc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:59.987337 kubelet[2904]: E0123 01:04:59.983716 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a129ef304999d872b4c7086fe9fd11ca2d70bb02922e94967a5101d210253dc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pk4tl" Jan 23 01:04:59.987337 kubelet[2904]: E0123 01:04:59.983749 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a129ef304999d872b4c7086fe9fd11ca2d70bb02922e94967a5101d210253dc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pk4tl" Jan 23 01:04:59.990753 kubelet[2904]: E0123 01:04:59.990458 2904 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pk4tl_calico-system(77cee7a3-d314-42b2-8d1b-22ce21da8d56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pk4tl_calico-system(77cee7a3-d314-42b2-8d1b-22ce21da8d56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a129ef304999d872b4c7086fe9fd11ca2d70bb02922e94967a5101d210253dc9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:05:00.016991 systemd[1]: run-netns-cni\x2df085cbe9\x2da875\x2d2986\x2de00e\x2dbafae42d1abf.mount: Deactivated successfully. Jan 23 01:05:00.080626 containerd[1567]: time="2026-01-23T01:05:00.080568269Z" level=error msg="Failed to destroy network for sandbox \"a846dc351a83838d9e1f9b93fb48bb184d3c8b6cf40c1e6c8793d0dcd023377b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:00.112695 systemd[1]: run-netns-cni\x2de40da65a\x2d1d8c\x2de502\x2de14d\x2dd6f4be95d279.mount: Deactivated successfully. 
Jan 23 01:05:00.121966 containerd[1567]: time="2026-01-23T01:05:00.120720762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd579f464-47gkr,Uid:aa11cfa4-c767-44e1-bc2c-24c685ae9875,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a846dc351a83838d9e1f9b93fb48bb184d3c8b6cf40c1e6c8793d0dcd023377b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:00.122311 kubelet[2904]: E0123 01:05:00.121561 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a846dc351a83838d9e1f9b93fb48bb184d3c8b6cf40c1e6c8793d0dcd023377b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:00.122311 kubelet[2904]: E0123 01:05:00.121651 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a846dc351a83838d9e1f9b93fb48bb184d3c8b6cf40c1e6c8793d0dcd023377b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" Jan 23 01:05:00.122311 kubelet[2904]: E0123 01:05:00.121691 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a846dc351a83838d9e1f9b93fb48bb184d3c8b6cf40c1e6c8793d0dcd023377b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" Jan 23 01:05:00.122459 kubelet[2904]: E0123 01:05:00.121919 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cd579f464-47gkr_calico-apiserver(aa11cfa4-c767-44e1-bc2c-24c685ae9875)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cd579f464-47gkr_calico-apiserver(aa11cfa4-c767-44e1-bc2c-24c685ae9875)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a846dc351a83838d9e1f9b93fb48bb184d3c8b6cf40c1e6c8793d0dcd023377b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875" Jan 23 01:05:01.572980 containerd[1567]: time="2026-01-23T01:05:01.572702938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5hvbp,Uid:4d54e261-de28-4a61-bcdc-0ebb829e113e,Namespace:calico-system,Attempt:0,}" Jan 23 01:05:01.956576 containerd[1567]: time="2026-01-23T01:05:01.944315876Z" level=error msg="Failed to destroy network for sandbox \"d2a56704c456be611b12b1062b6feb447307da4be016ec772403204bc78570cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:01.953028 systemd[1]: run-netns-cni\x2df7bb902c\x2df831\x2d41d8\x2dced5\x2dd0e4df3b8dd3.mount: Deactivated successfully. 
Jan 23 01:05:02.018303 containerd[1567]: time="2026-01-23T01:05:02.014467734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5hvbp,Uid:4d54e261-de28-4a61-bcdc-0ebb829e113e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2a56704c456be611b12b1062b6feb447307da4be016ec772403204bc78570cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:02.018615 kubelet[2904]: E0123 01:05:02.016088 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2a56704c456be611b12b1062b6feb447307da4be016ec772403204bc78570cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:02.018615 kubelet[2904]: E0123 01:05:02.016164 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2a56704c456be611b12b1062b6feb447307da4be016ec772403204bc78570cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5hvbp" Jan 23 01:05:02.018615 kubelet[2904]: E0123 01:05:02.016194 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2a56704c456be611b12b1062b6feb447307da4be016ec772403204bc78570cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-5hvbp" Jan 23 01:05:02.019744 kubelet[2904]: E0123 01:05:02.016353 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5hvbp_calico-system(4d54e261-de28-4a61-bcdc-0ebb829e113e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5hvbp_calico-system(4d54e261-de28-4a61-bcdc-0ebb829e113e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2a56704c456be611b12b1062b6feb447307da4be016ec772403204bc78570cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e" Jan 23 01:05:10.565988 containerd[1567]: time="2026-01-23T01:05:10.565478663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd579f464-d54m6,Uid:3c57c36f-e9c4-4469-830b-86d51909b784,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:05:10.829153 containerd[1567]: time="2026-01-23T01:05:10.828920350Z" level=error msg="Failed to destroy network for sandbox \"b24fac5a6dcbb7e08834a6ac2b490691b2d460e6fe7231e0f4ab4e79385d05fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:10.833709 containerd[1567]: time="2026-01-23T01:05:10.833488466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd579f464-d54m6,Uid:3c57c36f-e9c4-4469-830b-86d51909b784,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b24fac5a6dcbb7e08834a6ac2b490691b2d460e6fe7231e0f4ab4e79385d05fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:10.835315 kubelet[2904]: E0123 01:05:10.834972 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b24fac5a6dcbb7e08834a6ac2b490691b2d460e6fe7231e0f4ab4e79385d05fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:10.836444 kubelet[2904]: E0123 01:05:10.836151 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b24fac5a6dcbb7e08834a6ac2b490691b2d460e6fe7231e0f4ab4e79385d05fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" Jan 23 01:05:10.836444 kubelet[2904]: E0123 01:05:10.836270 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b24fac5a6dcbb7e08834a6ac2b490691b2d460e6fe7231e0f4ab4e79385d05fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" Jan 23 01:05:10.837375 kubelet[2904]: E0123 01:05:10.836985 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cd579f464-d54m6_calico-apiserver(3c57c36f-e9c4-4469-830b-86d51909b784)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cd579f464-d54m6_calico-apiserver(3c57c36f-e9c4-4469-830b-86d51909b784)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"b24fac5a6dcbb7e08834a6ac2b490691b2d460e6fe7231e0f4ab4e79385d05fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784" Jan 23 01:05:10.841439 systemd[1]: run-netns-cni\x2da7b67c34\x2d2ac9\x2db670\x2d26a6\x2d80a426aef466.mount: Deactivated successfully. Jan 23 01:05:11.569999 kubelet[2904]: E0123 01:05:11.569537 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:11.571546 containerd[1567]: time="2026-01-23T01:05:11.571319454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p82rz,Uid:26ba25fb-0e8b-48e9-998f-30a0f733f697,Namespace:kube-system,Attempt:0,}" Jan 23 01:05:11.770113 containerd[1567]: time="2026-01-23T01:05:11.769969356Z" level=error msg="Failed to destroy network for sandbox \"2db8dcfeb6f6ada6250848937cee1569f1c67eba1610dad595e9df926018c89c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:11.775567 systemd[1]: run-netns-cni\x2d678665bc\x2d0551\x2d191c\x2dc396\x2da0e5f9289e30.mount: Deactivated successfully. 
Jan 23 01:05:11.779177 containerd[1567]: time="2026-01-23T01:05:11.779078214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p82rz,Uid:26ba25fb-0e8b-48e9-998f-30a0f733f697,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2db8dcfeb6f6ada6250848937cee1569f1c67eba1610dad595e9df926018c89c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:11.780240 kubelet[2904]: E0123 01:05:11.779485 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2db8dcfeb6f6ada6250848937cee1569f1c67eba1610dad595e9df926018c89c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:11.780240 kubelet[2904]: E0123 01:05:11.780136 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2db8dcfeb6f6ada6250848937cee1569f1c67eba1610dad595e9df926018c89c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p82rz" Jan 23 01:05:11.780240 kubelet[2904]: E0123 01:05:11.780169 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2db8dcfeb6f6ada6250848937cee1569f1c67eba1610dad595e9df926018c89c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-p82rz" Jan 23 01:05:11.782686 kubelet[2904]: E0123 01:05:11.782340 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p82rz_kube-system(26ba25fb-0e8b-48e9-998f-30a0f733f697)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p82rz_kube-system(26ba25fb-0e8b-48e9-998f-30a0f733f697)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2db8dcfeb6f6ada6250848937cee1569f1c67eba1610dad595e9df926018c89c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p82rz" podUID="26ba25fb-0e8b-48e9-998f-30a0f733f697" Jan 23 01:05:12.564477 containerd[1567]: time="2026-01-23T01:05:12.564432683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd579f464-47gkr,Uid:aa11cfa4-c767-44e1-bc2c-24c685ae9875,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:05:12.882101 containerd[1567]: time="2026-01-23T01:05:12.881401763Z" level=error msg="Failed to destroy network for sandbox \"a725e60964c28e8b30317d6ca0dc43efa22417b27367b59d919122c8662c9dac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:12.886381 systemd[1]: run-netns-cni\x2d1003cc97\x2dfea4\x2d72b9\x2db609\x2de573124af451.mount: Deactivated successfully. 
Jan 23 01:05:12.890034 containerd[1567]: time="2026-01-23T01:05:12.889210664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd579f464-47gkr,Uid:aa11cfa4-c767-44e1-bc2c-24c685ae9875,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a725e60964c28e8b30317d6ca0dc43efa22417b27367b59d919122c8662c9dac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:12.890296 kubelet[2904]: E0123 01:05:12.890033 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a725e60964c28e8b30317d6ca0dc43efa22417b27367b59d919122c8662c9dac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:12.890296 kubelet[2904]: E0123 01:05:12.890112 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a725e60964c28e8b30317d6ca0dc43efa22417b27367b59d919122c8662c9dac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" Jan 23 01:05:12.890296 kubelet[2904]: E0123 01:05:12.890144 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a725e60964c28e8b30317d6ca0dc43efa22417b27367b59d919122c8662c9dac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" Jan 23 01:05:12.892164 kubelet[2904]: E0123 01:05:12.890215 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cd579f464-47gkr_calico-apiserver(aa11cfa4-c767-44e1-bc2c-24c685ae9875)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cd579f464-47gkr_calico-apiserver(aa11cfa4-c767-44e1-bc2c-24c685ae9875)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a725e60964c28e8b30317d6ca0dc43efa22417b27367b59d919122c8662c9dac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875" Jan 23 01:05:13.184395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount424602307.mount: Deactivated successfully. 
Jan 23 01:05:13.241462 containerd[1567]: time="2026-01-23T01:05:13.240275175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:05:13.246555 containerd[1567]: time="2026-01-23T01:05:13.245113756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 01:05:13.249176 containerd[1567]: time="2026-01-23T01:05:13.248937356Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:05:13.255042 containerd[1567]: time="2026-01-23T01:05:13.254276801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:05:13.255145 containerd[1567]: time="2026-01-23T01:05:13.255057464Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 27.324638674s" Jan 23 01:05:13.255341 containerd[1567]: time="2026-01-23T01:05:13.255101819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 01:05:13.320862 containerd[1567]: time="2026-01-23T01:05:13.320609199Z" level=info msg="CreateContainer within sandbox \"391281fb638276c5992fd22ab62d15a77986716f45f62cc154de5a194ea79a23\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 01:05:13.367885 containerd[1567]: time="2026-01-23T01:05:13.365476778Z" level=info msg="Container 
c6800722e40314829f3134fd182dbc198c10472da5beeb7e4aefc86ed3bf3f36: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:05:13.418486 containerd[1567]: time="2026-01-23T01:05:13.418149690Z" level=info msg="CreateContainer within sandbox \"391281fb638276c5992fd22ab62d15a77986716f45f62cc154de5a194ea79a23\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c6800722e40314829f3134fd182dbc198c10472da5beeb7e4aefc86ed3bf3f36\"" Jan 23 01:05:13.425586 containerd[1567]: time="2026-01-23T01:05:13.420429179Z" level=info msg="StartContainer for \"c6800722e40314829f3134fd182dbc198c10472da5beeb7e4aefc86ed3bf3f36\"" Jan 23 01:05:13.425586 containerd[1567]: time="2026-01-23T01:05:13.422044063Z" level=info msg="connecting to shim c6800722e40314829f3134fd182dbc198c10472da5beeb7e4aefc86ed3bf3f36" address="unix:///run/containerd/s/4af3a0aaf1553df02ec19a158bc55ea4b6b641df643b460d93e1b4a2f8d7b125" protocol=ttrpc version=3 Jan 23 01:05:13.530351 systemd[1]: Started cri-containerd-c6800722e40314829f3134fd182dbc198c10472da5beeb7e4aefc86ed3bf3f36.scope - libcontainer container c6800722e40314829f3134fd182dbc198c10472da5beeb7e4aefc86ed3bf3f36. 
Jan 23 01:05:13.571475 containerd[1567]: time="2026-01-23T01:05:13.571151913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pk4tl,Uid:77cee7a3-d314-42b2-8d1b-22ce21da8d56,Namespace:calico-system,Attempt:0,}" Jan 23 01:05:13.572495 containerd[1567]: time="2026-01-23T01:05:13.572235292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dcc89fd94-gvlr2,Uid:5ea72ad9-04e5-48e1-a1f3-bd44567b901e,Namespace:calico-system,Attempt:0,}" Jan 23 01:05:13.857551 containerd[1567]: time="2026-01-23T01:05:13.856959590Z" level=error msg="Failed to destroy network for sandbox \"0770366861e4b78d60382b50a64d88036cb90a363b5f95283a09650554c03d29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:13.863381 containerd[1567]: time="2026-01-23T01:05:13.863294565Z" level=info msg="StartContainer for \"c6800722e40314829f3134fd182dbc198c10472da5beeb7e4aefc86ed3bf3f36\" returns successfully" Jan 23 01:05:13.870336 containerd[1567]: time="2026-01-23T01:05:13.870286167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dcc89fd94-gvlr2,Uid:5ea72ad9-04e5-48e1-a1f3-bd44567b901e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0770366861e4b78d60382b50a64d88036cb90a363b5f95283a09650554c03d29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:13.872930 kubelet[2904]: E0123 01:05:13.872516 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0770366861e4b78d60382b50a64d88036cb90a363b5f95283a09650554c03d29\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:13.872930 kubelet[2904]: E0123 01:05:13.872590 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0770366861e4b78d60382b50a64d88036cb90a363b5f95283a09650554c03d29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" Jan 23 01:05:13.872930 kubelet[2904]: E0123 01:05:13.872621 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0770366861e4b78d60382b50a64d88036cb90a363b5f95283a09650554c03d29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" Jan 23 01:05:13.873607 kubelet[2904]: E0123 01:05:13.872681 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5dcc89fd94-gvlr2_calico-system(5ea72ad9-04e5-48e1-a1f3-bd44567b901e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5dcc89fd94-gvlr2_calico-system(5ea72ad9-04e5-48e1-a1f3-bd44567b901e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0770366861e4b78d60382b50a64d88036cb90a363b5f95283a09650554c03d29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" 
podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e" Jan 23 01:05:13.908257 containerd[1567]: time="2026-01-23T01:05:13.907594082Z" level=error msg="Failed to destroy network for sandbox \"51ea3d436e288bc94b53c5c4f890e7580d3b8e99c15e8cc48156ae03bd941cfc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:13.921214 containerd[1567]: time="2026-01-23T01:05:13.921059814Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pk4tl,Uid:77cee7a3-d314-42b2-8d1b-22ce21da8d56,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"51ea3d436e288bc94b53c5c4f890e7580d3b8e99c15e8cc48156ae03bd941cfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:13.924243 kubelet[2904]: E0123 01:05:13.924191 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51ea3d436e288bc94b53c5c4f890e7580d3b8e99c15e8cc48156ae03bd941cfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:05:13.924298 systemd[1]: run-netns-cni\x2d1e6cb5b6\x2d122b\x2d8e06\x2da0f6\x2d2638cedbd1e1.mount: Deactivated successfully. 
Jan 23 01:05:13.926489 kubelet[2904]: E0123 01:05:13.925610 2904 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51ea3d436e288bc94b53c5c4f890e7580d3b8e99c15e8cc48156ae03bd941cfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pk4tl" Jan 23 01:05:13.926489 kubelet[2904]: E0123 01:05:13.925649 2904 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51ea3d436e288bc94b53c5c4f890e7580d3b8e99c15e8cc48156ae03bd941cfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pk4tl" Jan 23 01:05:13.926489 kubelet[2904]: E0123 01:05:13.926098 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pk4tl_calico-system(77cee7a3-d314-42b2-8d1b-22ce21da8d56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pk4tl_calico-system(77cee7a3-d314-42b2-8d1b-22ce21da8d56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51ea3d436e288bc94b53c5c4f890e7580d3b8e99c15e8cc48156ae03bd941cfc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:05:14.229463 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 01:05:14.232151 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 23 01:05:14.318030 kubelet[2904]: E0123 01:05:14.317648 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:14.386187 kubelet[2904]: I0123 01:05:14.380255 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-q9nkf" podStartSLOduration=3.307335875 podStartE2EDuration="52.380235872s" podCreationTimestamp="2026-01-23 01:04:22 +0000 UTC" firstStartedPulling="2026-01-23 01:04:24.184079893 +0000 UTC m=+87.529576563" lastFinishedPulling="2026-01-23 01:05:13.2569799 +0000 UTC m=+136.602476560" observedRunningTime="2026-01-23 01:05:14.373405009 +0000 UTC m=+137.718901679" watchObservedRunningTime="2026-01-23 01:05:14.380235872 +0000 UTC m=+137.725732513" Jan 23 01:05:14.592176 kubelet[2904]: E0123 01:05:14.585621 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:14.606949 containerd[1567]: time="2026-01-23T01:05:14.585684925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c554c8d6b-phpdd,Uid:baca7367-8f45-40cf-b782-1ff5b51a0c81,Namespace:calico-system,Attempt:0,}" Jan 23 01:05:14.606949 containerd[1567]: time="2026-01-23T01:05:14.600155046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-thkld,Uid:29e4782e-15b5-4470-9ba4-cbf36b95c79b,Namespace:kube-system,Attempt:0,}" Jan 23 01:05:15.343509 kubelet[2904]: E0123 01:05:15.342752 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:17.033748 systemd-networkd[1485]: cali929e9a905ad: Link UP Jan 23 01:05:17.035725 systemd-networkd[1485]: cali929e9a905ad: Gained carrier Jan 23 01:05:17.129070 containerd[1567]: 
2026-01-23 01:05:15.409 [INFO][4633] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:05:17.129070 containerd[1567]: 2026-01-23 01:05:15.530 [INFO][4633] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--thkld-eth0 coredns-674b8bbfcf- kube-system 29e4782e-15b5-4470-9ba4-cbf36b95c79b 1041 0 2026-01-23 01:03:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-thkld eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali929e9a905ad [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" Namespace="kube-system" Pod="coredns-674b8bbfcf-thkld" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--thkld-" Jan 23 01:05:17.129070 containerd[1567]: 2026-01-23 01:05:15.533 [INFO][4633] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" Namespace="kube-system" Pod="coredns-674b8bbfcf-thkld" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--thkld-eth0" Jan 23 01:05:17.129070 containerd[1567]: 2026-01-23 01:05:16.234 [INFO][4687] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" HandleID="k8s-pod-network.ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" Workload="localhost-k8s-coredns--674b8bbfcf--thkld-eth0" Jan 23 01:05:17.130320 containerd[1567]: 2026-01-23 01:05:16.238 [INFO][4687] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" HandleID="k8s-pod-network.ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" 
Workload="localhost-k8s-coredns--674b8bbfcf--thkld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e07c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-thkld", "timestamp":"2026-01-23 01:05:16.23469023 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:05:17.130320 containerd[1567]: 2026-01-23 01:05:16.238 [INFO][4687] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:05:17.130320 containerd[1567]: 2026-01-23 01:05:16.238 [INFO][4687] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:05:17.130320 containerd[1567]: 2026-01-23 01:05:16.240 [INFO][4687] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:05:17.130320 containerd[1567]: 2026-01-23 01:05:16.373 [INFO][4687] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" host="localhost" Jan 23 01:05:17.130320 containerd[1567]: 2026-01-23 01:05:16.450 [INFO][4687] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:05:17.130320 containerd[1567]: 2026-01-23 01:05:16.495 [INFO][4687] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:05:17.130320 containerd[1567]: 2026-01-23 01:05:16.534 [INFO][4687] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:05:17.130320 containerd[1567]: 2026-01-23 01:05:16.551 [INFO][4687] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:05:17.130320 containerd[1567]: 2026-01-23 01:05:16.552 [INFO][4687] ipam/ipam.go 1219: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" host="localhost" Jan 23 01:05:17.132501 containerd[1567]: 2026-01-23 01:05:16.590 [INFO][4687] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b Jan 23 01:05:17.132501 containerd[1567]: 2026-01-23 01:05:16.657 [INFO][4687] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" host="localhost" Jan 23 01:05:17.132732 containerd[1567]: 2026-01-23 01:05:16.694 [ERROR][4687] ipam/customresource.go 184: Error updating resource Key=IPAMBlock(192-168-88-128-26) Name="192-168-88-128-26" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-88-128-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"192.168.88.128/26", Affinity:(*string)(0xc0003e17c0), Allocations:[]*int{(*int)(0xc000185338), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), 
(*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc0003e07c0), AttrSecondary:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-thkld", "timestamp":"2026-01-23 01:05:16.23469023 +0000 UTC"}}}, SequenceNumber:0x188d36a6490e0b9c, SequenceNumberForAllocation:map[string]uint64{"0":0x188d36a6490e0b9b}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-88-128-26": the object has been modified; please apply your changes to the latest version and try again Jan 23 01:05:17.132732 containerd[1567]: 2026-01-23 01:05:16.698 [INFO][4687] ipam/ipam.go 1250: Failed to update block block=192.168.88.128/26 error=update conflict: IPAMBlock(192-168-88-128-26) handle="k8s-pod-network.ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" host="localhost" Jan 23 01:05:17.132732 containerd[1567]: 2026-01-23 01:05:16.832 [INFO][4687] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" host="localhost" Jan 23 01:05:17.132732 containerd[1567]: 2026-01-23 01:05:16.839 [INFO][4687] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b Jan 23 01:05:17.132732 containerd[1567]: 
2026-01-23 01:05:16.855 [INFO][4687] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" host="localhost" Jan 23 01:05:17.132732 containerd[1567]: 2026-01-23 01:05:16.911 [INFO][4687] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" host="localhost" Jan 23 01:05:17.132732 containerd[1567]: 2026-01-23 01:05:16.911 [INFO][4687] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" host="localhost" Jan 23 01:05:17.132732 containerd[1567]: 2026-01-23 01:05:16.912 [INFO][4687] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:05:17.132732 containerd[1567]: 2026-01-23 01:05:16.912 [INFO][4687] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" HandleID="k8s-pod-network.ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" Workload="localhost-k8s-coredns--674b8bbfcf--thkld-eth0" Jan 23 01:05:17.133724 containerd[1567]: 2026-01-23 01:05:16.938 [INFO][4633] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" Namespace="kube-system" Pod="coredns-674b8bbfcf-thkld" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--thkld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--thkld-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"29e4782e-15b5-4470-9ba4-cbf36b95c79b", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 
23, 1, 3, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-thkld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali929e9a905ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:05:17.133724 containerd[1567]: 2026-01-23 01:05:16.938 [INFO][4633] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" Namespace="kube-system" Pod="coredns-674b8bbfcf-thkld" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--thkld-eth0" Jan 23 01:05:17.133724 containerd[1567]: 2026-01-23 01:05:16.938 [INFO][4633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali929e9a905ad ContainerID="ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-thkld" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--thkld-eth0" Jan 23 01:05:17.133724 containerd[1567]: 2026-01-23 01:05:17.061 [INFO][4633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" Namespace="kube-system" Pod="coredns-674b8bbfcf-thkld" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--thkld-eth0" Jan 23 01:05:17.133724 containerd[1567]: 2026-01-23 01:05:17.064 [INFO][4633] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" Namespace="kube-system" Pod="coredns-674b8bbfcf-thkld" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--thkld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--thkld-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"29e4782e-15b5-4470-9ba4-cbf36b95c79b", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 3, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b", Pod:"coredns-674b8bbfcf-thkld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali929e9a905ad", MAC:"da:26:0f:20:ca:cc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:05:17.136676 containerd[1567]: 2026-01-23 01:05:17.114 [INFO][4633] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" Namespace="kube-system" Pod="coredns-674b8bbfcf-thkld" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--thkld-eth0" Jan 23 01:05:17.199098 systemd-networkd[1485]: cali7e95d1d9506: Link UP Jan 23 01:05:17.201130 systemd-networkd[1485]: cali7e95d1d9506: Gained carrier Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:15.238 [INFO][4630] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:15.497 [INFO][4630] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6c554c8d6b--phpdd-eth0 whisker-6c554c8d6b- calico-system baca7367-8f45-40cf-b782-1ff5b51a0c81 1146 0 2026-01-23 01:04:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c554c8d6b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6c554c8d6b-phpdd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7e95d1d9506 [] [] }} 
ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Namespace="calico-system" Pod="whisker-6c554c8d6b-phpdd" WorkloadEndpoint="localhost-k8s-whisker--6c554c8d6b--phpdd-" Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:15.498 [INFO][4630] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Namespace="calico-system" Pod="whisker-6c554c8d6b-phpdd" WorkloadEndpoint="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:16.233 [INFO][4685] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" HandleID="k8s-pod-network.2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Workload="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:16.237 [INFO][4685] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" HandleID="k8s-pod-network.2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Workload="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6c554c8d6b-phpdd", "timestamp":"2026-01-23 01:05:16.233602462 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:16.237 [INFO][4685] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:16.915 [INFO][4685] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:16.915 [INFO][4685] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:16.966 [INFO][4685] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" host="localhost" Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:17.007 [INFO][4685] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:17.061 [INFO][4685] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:17.079 [INFO][4685] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:17.099 [INFO][4685] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:17.110 [INFO][4685] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" host="localhost" Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:17.123 [INFO][4685] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66 Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:17.143 [INFO][4685] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" host="localhost" Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:17.166 [INFO][4685] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" host="localhost" Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:17.166 [INFO][4685] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" host="localhost" Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:17.167 [INFO][4685] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:05:17.252441 containerd[1567]: 2026-01-23 01:05:17.167 [INFO][4685] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" HandleID="k8s-pod-network.2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Workload="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" Jan 23 01:05:17.254084 containerd[1567]: 2026-01-23 01:05:17.173 [INFO][4630] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Namespace="calico-system" Pod="whisker-6c554c8d6b-phpdd" WorkloadEndpoint="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c554c8d6b--phpdd-eth0", GenerateName:"whisker-6c554c8d6b-", Namespace:"calico-system", SelfLink:"", UID:"baca7367-8f45-40cf-b782-1ff5b51a0c81", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c554c8d6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6c554c8d6b-phpdd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7e95d1d9506", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:05:17.254084 containerd[1567]: 2026-01-23 01:05:17.175 [INFO][4630] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Namespace="calico-system" Pod="whisker-6c554c8d6b-phpdd" WorkloadEndpoint="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" Jan 23 01:05:17.254084 containerd[1567]: 2026-01-23 01:05:17.175 [INFO][4630] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e95d1d9506 ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Namespace="calico-system" Pod="whisker-6c554c8d6b-phpdd" WorkloadEndpoint="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" Jan 23 01:05:17.254084 containerd[1567]: 2026-01-23 01:05:17.204 [INFO][4630] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Namespace="calico-system" Pod="whisker-6c554c8d6b-phpdd" WorkloadEndpoint="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" Jan 23 01:05:17.254084 containerd[1567]: 2026-01-23 01:05:17.206 [INFO][4630] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Namespace="calico-system" Pod="whisker-6c554c8d6b-phpdd" 
WorkloadEndpoint="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c554c8d6b--phpdd-eth0", GenerateName:"whisker-6c554c8d6b-", Namespace:"calico-system", SelfLink:"", UID:"baca7367-8f45-40cf-b782-1ff5b51a0c81", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c554c8d6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66", Pod:"whisker-6c554c8d6b-phpdd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7e95d1d9506", MAC:"b6:b2:2d:50:ea:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:05:17.254084 containerd[1567]: 2026-01-23 01:05:17.245 [INFO][4630] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Namespace="calico-system" Pod="whisker-6c554c8d6b-phpdd" WorkloadEndpoint="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" Jan 23 01:05:17.330140 containerd[1567]: time="2026-01-23T01:05:17.329371638Z" level=info msg="connecting to shim 
ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b" address="unix:///run/containerd/s/08a3889a73c99011ab3fde97a9ed87a49682ea2dd4c4007cd3ff6c95d8b1e6f7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:05:17.411253 containerd[1567]: time="2026-01-23T01:05:17.410984431Z" level=info msg="connecting to shim 2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" address="unix:///run/containerd/s/2d2d919b5bc370b1cfe7fa903aa789232903046cdec462c4a5df09ad321e8d85" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:05:17.471079 systemd[1]: Started cri-containerd-ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b.scope - libcontainer container ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b. Jan 23 01:05:17.495379 systemd[1]: Started cri-containerd-2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66.scope - libcontainer container 2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66. Jan 23 01:05:17.531216 systemd-resolved[1397]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:05:17.549549 systemd-resolved[1397]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:05:17.568458 containerd[1567]: time="2026-01-23T01:05:17.568245722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5hvbp,Uid:4d54e261-de28-4a61-bcdc-0ebb829e113e,Namespace:calico-system,Attempt:0,}" Jan 23 01:05:17.754432 containerd[1567]: time="2026-01-23T01:05:17.753726558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-thkld,Uid:29e4782e-15b5-4470-9ba4-cbf36b95c79b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b\"" Jan 23 01:05:17.766619 kubelet[2904]: E0123 01:05:17.761348 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:17.809340 containerd[1567]: time="2026-01-23T01:05:17.808475689Z" level=info msg="CreateContainer within sandbox \"ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:05:17.814035 containerd[1567]: time="2026-01-23T01:05:17.813662597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c554c8d6b-phpdd,Uid:baca7367-8f45-40cf-b782-1ff5b51a0c81,Namespace:calico-system,Attempt:0,} returns sandbox id \"2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66\"" Jan 23 01:05:17.839971 containerd[1567]: time="2026-01-23T01:05:17.835359928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:05:17.894398 containerd[1567]: time="2026-01-23T01:05:17.890072858Z" level=info msg="Container e413c7ee33df265199ba69e67c7e6f422b2bf3e9169dfa6d36f1a7d966163fea: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:05:17.925589 containerd[1567]: time="2026-01-23T01:05:17.925396640Z" level=info msg="CreateContainer within sandbox \"ce036e1eb2ac11819e39631af745fa16d8ec105fff6b7a5ebb892d8c2b62420b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e413c7ee33df265199ba69e67c7e6f422b2bf3e9169dfa6d36f1a7d966163fea\"" Jan 23 01:05:17.930044 containerd[1567]: time="2026-01-23T01:05:17.927654348Z" level=info msg="StartContainer for \"e413c7ee33df265199ba69e67c7e6f422b2bf3e9169dfa6d36f1a7d966163fea\"" Jan 23 01:05:17.932758 containerd[1567]: time="2026-01-23T01:05:17.931560001Z" level=info msg="connecting to shim e413c7ee33df265199ba69e67c7e6f422b2bf3e9169dfa6d36f1a7d966163fea" address="unix:///run/containerd/s/08a3889a73c99011ab3fde97a9ed87a49682ea2dd4c4007cd3ff6c95d8b1e6f7" protocol=ttrpc version=3 Jan 23 01:05:17.970694 containerd[1567]: time="2026-01-23T01:05:17.970516150Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
01:05:17.996268 containerd[1567]: time="2026-01-23T01:05:17.981701602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:05:18.008549 containerd[1567]: time="2026-01-23T01:05:18.008251039Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:05:18.014135 kubelet[2904]: E0123 01:05:18.011709 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:05:18.014135 kubelet[2904]: E0123 01:05:18.012173 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:05:18.014286 kubelet[2904]: E0123 01:05:18.012462 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a705283100714243a961fb2d223d106b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rw9hw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c554c8d6b-phpdd_calico-system(baca7367-8f45-40cf-b782-1ff5b51a0c81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:18.016036 containerd[1567]: time="2026-01-23T01:05:18.015457596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:05:18.163082 systemd[1]: Started 
cri-containerd-e413c7ee33df265199ba69e67c7e6f422b2bf3e9169dfa6d36f1a7d966163fea.scope - libcontainer container e413c7ee33df265199ba69e67c7e6f422b2bf3e9169dfa6d36f1a7d966163fea. Jan 23 01:05:18.199046 containerd[1567]: time="2026-01-23T01:05:18.198684994Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:18.211641 containerd[1567]: time="2026-01-23T01:05:18.211553746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:05:18.212005 containerd[1567]: time="2026-01-23T01:05:18.211672915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:05:18.212662 kubelet[2904]: E0123 01:05:18.212490 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:05:18.212662 kubelet[2904]: E0123 01:05:18.212569 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:05:18.216671 kubelet[2904]: E0123 01:05:18.212740 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rw9hw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c554c8d6b-phpdd_calico-system(baca7367-8f45-40cf-b782-1ff5b51a0c81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:18.218311 kubelet[2904]: E0123 01:05:18.218068 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c554c8d6b-phpdd" podUID="baca7367-8f45-40cf-b782-1ff5b51a0c81" Jan 23 01:05:18.394305 containerd[1567]: time="2026-01-23T01:05:18.393599257Z" level=info msg="StartContainer for \"e413c7ee33df265199ba69e67c7e6f422b2bf3e9169dfa6d36f1a7d966163fea\" returns successfully" Jan 23 01:05:18.447673 containerd[1567]: time="2026-01-23T01:05:18.447556322Z" level=info msg="StopPodSandbox for \"2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66\"" Jan 23 01:05:18.456722 kubelet[2904]: E0123 01:05:18.455129 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:18.635416 kubelet[2904]: I0123 01:05:18.634748 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-thkld" podStartSLOduration=136.634729716 podStartE2EDuration="2m16.634729716s" podCreationTimestamp="2026-01-23 01:03:02 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:05:18.629677582 +0000 UTC m=+141.975174231" watchObservedRunningTime="2026-01-23 01:05:18.634729716 +0000 UTC m=+141.980226366" Jan 23 01:05:18.700622 systemd-networkd[1485]: cali7e95d1d9506: Gained IPv6LL Jan 23 01:05:18.733096 systemd[1]: cri-containerd-2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66.scope: Deactivated successfully. Jan 23 01:05:18.790470 containerd[1567]: time="2026-01-23T01:05:18.764687437Z" level=info msg="received sandbox exit event container_id:\"2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66\" id:\"2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66\" exit_status:137 exited_at:{seconds:1769130318 nanos:761018405}" monitor_name=podsandbox Jan 23 01:05:19.013344 systemd-networkd[1485]: cali929e9a905ad: Gained IPv6LL Jan 23 01:05:19.096543 containerd[1567]: time="2026-01-23T01:05:19.096244956Z" level=error msg="Failed to get usage for snapshot \"c6800722e40314829f3134fd182dbc198c10472da5beeb7e4aefc86ed3bf3f36\"" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/73/fs/etc/service/enabled/bird/supervise/stat.new: no such file or directory" Jan 23 01:05:19.250673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66-rootfs.mount: Deactivated successfully. 
Jan 23 01:05:19.305448 systemd-networkd[1485]: cali6f2bd2a2021: Link UP Jan 23 01:05:19.311165 systemd-networkd[1485]: cali6f2bd2a2021: Gained carrier Jan 23 01:05:19.329895 containerd[1567]: time="2026-01-23T01:05:19.329460113Z" level=info msg="shim disconnected" id=2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66 namespace=k8s.io Jan 23 01:05:19.333585 containerd[1567]: time="2026-01-23T01:05:19.333476089Z" level=warning msg="cleaning up after shim disconnected" id=2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66 namespace=k8s.io Jan 23 01:05:19.342748 containerd[1567]: time="2026-01-23T01:05:19.333714014Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:18.074 [INFO][4842] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:18.219 [INFO][4842] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--5hvbp-eth0 goldmane-666569f655- calico-system 4d54e261-de28-4a61-bcdc-0ebb829e113e 1042 0 2026-01-23 01:04:18 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-5hvbp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6f2bd2a2021 [] [] }} ContainerID="a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" Namespace="calico-system" Pod="goldmane-666569f655-5hvbp" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5hvbp-" Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:18.219 [INFO][4842] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" Namespace="calico-system" 
Pod="goldmane-666569f655-5hvbp" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5hvbp-eth0" Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:18.583 [INFO][4956] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" HandleID="k8s-pod-network.a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" Workload="localhost-k8s-goldmane--666569f655--5hvbp-eth0" Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:18.633 [INFO][4956] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" HandleID="k8s-pod-network.a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" Workload="localhost-k8s-goldmane--666569f655--5hvbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000518c40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-5hvbp", "timestamp":"2026-01-23 01:05:18.583228838 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:18.638 [INFO][4956] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:18.638 [INFO][4956] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:18.638 [INFO][4956] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:18.730 [INFO][4956] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" host="localhost" Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:18.824 [INFO][4956] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:18.911 [INFO][4956] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:18.931 [INFO][4956] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:18.949 [INFO][4956] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:18.949 [INFO][4956] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" host="localhost" Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:19.007 [INFO][4956] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497 Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:19.132 [INFO][4956] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" host="localhost" Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:19.198 [INFO][4956] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" host="localhost" Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:19.198 [INFO][4956] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" host="localhost" Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:19.198 [INFO][4956] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:05:19.452212 containerd[1567]: 2026-01-23 01:05:19.198 [INFO][4956] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" HandleID="k8s-pod-network.a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" Workload="localhost-k8s-goldmane--666569f655--5hvbp-eth0" Jan 23 01:05:19.457724 containerd[1567]: 2026-01-23 01:05:19.257 [INFO][4842] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" Namespace="calico-system" Pod="goldmane-666569f655-5hvbp" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5hvbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--5hvbp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"4d54e261-de28-4a61-bcdc-0ebb829e113e", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-5hvbp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6f2bd2a2021", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:05:19.457724 containerd[1567]: 2026-01-23 01:05:19.266 [INFO][4842] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" Namespace="calico-system" Pod="goldmane-666569f655-5hvbp" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5hvbp-eth0" Jan 23 01:05:19.457724 containerd[1567]: 2026-01-23 01:05:19.268 [INFO][4842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f2bd2a2021 ContainerID="a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" Namespace="calico-system" Pod="goldmane-666569f655-5hvbp" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5hvbp-eth0" Jan 23 01:05:19.457724 containerd[1567]: 2026-01-23 01:05:19.313 [INFO][4842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" Namespace="calico-system" Pod="goldmane-666569f655-5hvbp" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5hvbp-eth0" Jan 23 01:05:19.457724 containerd[1567]: 2026-01-23 01:05:19.317 [INFO][4842] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" Namespace="calico-system" Pod="goldmane-666569f655-5hvbp" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5hvbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--5hvbp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"4d54e261-de28-4a61-bcdc-0ebb829e113e", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497", Pod:"goldmane-666569f655-5hvbp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6f2bd2a2021", MAC:"ca:4e:24:29:d4:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:05:19.457724 containerd[1567]: 2026-01-23 01:05:19.404 [INFO][4842] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" Namespace="calico-system" Pod="goldmane-666569f655-5hvbp" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5hvbp-eth0" Jan 23 01:05:19.491928 kubelet[2904]: E0123 01:05:19.491635 2904 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:19.556572 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66-shm.mount: Deactivated successfully. Jan 23 01:05:19.568269 containerd[1567]: time="2026-01-23T01:05:19.564365261Z" level=info msg="received sandbox container exit event sandbox_id:\"2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66\" exit_status:137 exited_at:{seconds:1769130318 nanos:761018405}" monitor_name=criService Jan 23 01:05:19.829371 containerd[1567]: time="2026-01-23T01:05:19.828513853Z" level=info msg="connecting to shim a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497" address="unix:///run/containerd/s/736d516768fa31492d67f3a0ed7518f57363ad5b1d5344e45ae0645c9159c414" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:05:20.161208 systemd[1]: Started cri-containerd-a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497.scope - libcontainer container a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497. 
Jan 23 01:05:20.296679 systemd-networkd[1485]: cali7e95d1d9506: Link DOWN Jan 23 01:05:20.296694 systemd-networkd[1485]: cali7e95d1d9506: Lost carrier Jan 23 01:05:20.483189 systemd-resolved[1397]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:05:20.528732 kubelet[2904]: I0123 01:05:20.527375 2904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Jan 23 01:05:20.538437 kubelet[2904]: E0123 01:05:20.535498 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:20.612533 systemd-networkd[1485]: cali6f2bd2a2021: Gained IPv6LL Jan 23 01:05:20.798174 containerd[1567]: time="2026-01-23T01:05:20.732680584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5hvbp,Uid:4d54e261-de28-4a61-bcdc-0ebb829e113e,Namespace:calico-system,Attempt:0,} returns sandbox id \"a59dc7ca0df7a1888608ee711cc681efe3c3cff1ae216954e4b68317750da497\"" Jan 23 01:05:20.798174 containerd[1567]: time="2026-01-23T01:05:20.749216327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:05:20.829613 containerd[1567]: time="2026-01-23T01:05:20.829436860Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:20.836938 containerd[1567]: time="2026-01-23T01:05:20.836223668Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:05:20.836938 containerd[1567]: time="2026-01-23T01:05:20.836435232Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:05:20.838910 kubelet[2904]: E0123 01:05:20.838512 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:05:20.838910 kubelet[2904]: E0123 01:05:20.838585 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:05:20.847636 kubelet[2904]: E0123 01:05:20.841291 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkbmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5hvbp_calico-system(4d54e261-de28-4a61-bcdc-0ebb829e113e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:20.852442 kubelet[2904]: E0123 01:05:20.851692 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e" Jan 23 01:05:20.976923 containerd[1567]: 2026-01-23 01:05:20.266 [INFO][5077] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Jan 23 01:05:20.976923 containerd[1567]: 2026-01-23 01:05:20.270 [INFO][5077] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" iface="eth0" netns="/var/run/netns/cni-e86e2323-a4f5-a8a8-37f3-44d52b79dbf3" Jan 23 01:05:20.976923 containerd[1567]: 2026-01-23 01:05:20.274 [INFO][5077] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" iface="eth0" netns="/var/run/netns/cni-e86e2323-a4f5-a8a8-37f3-44d52b79dbf3" Jan 23 01:05:20.976923 containerd[1567]: 2026-01-23 01:05:20.329 [INFO][5077] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" after=58.28996ms iface="eth0" netns="/var/run/netns/cni-e86e2323-a4f5-a8a8-37f3-44d52b79dbf3" Jan 23 01:05:20.976923 containerd[1567]: 2026-01-23 01:05:20.329 [INFO][5077] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Jan 23 01:05:20.976923 containerd[1567]: 2026-01-23 01:05:20.329 [INFO][5077] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Jan 23 01:05:20.976923 containerd[1567]: 2026-01-23 01:05:20.649 [INFO][5140] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" HandleID="k8s-pod-network.2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Workload="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" Jan 23 01:05:20.976923 containerd[1567]: 2026-01-23 01:05:20.668 [INFO][5140] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:05:20.976923 containerd[1567]: 2026-01-23 01:05:20.668 [INFO][5140] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:05:20.976923 containerd[1567]: 2026-01-23 01:05:20.924 [INFO][5140] ipam/ipam_plugin.go 455: Released address using handleID ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" HandleID="k8s-pod-network.2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Workload="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" Jan 23 01:05:20.976923 containerd[1567]: 2026-01-23 01:05:20.924 [INFO][5140] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" HandleID="k8s-pod-network.2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Workload="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" Jan 23 01:05:20.976923 containerd[1567]: 2026-01-23 01:05:20.935 [INFO][5140] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:05:20.976923 containerd[1567]: 2026-01-23 01:05:20.959 [INFO][5077] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Jan 23 01:05:20.980536 systemd[1]: run-netns-cni\x2de86e2323\x2da4f5\x2da8a8\x2d37f3\x2d44d52b79dbf3.mount: Deactivated successfully. 
Jan 23 01:05:20.998396 containerd[1567]: time="2026-01-23T01:05:20.997758817Z" level=info msg="TearDown network for sandbox \"2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66\" successfully" Jan 23 01:05:20.998396 containerd[1567]: time="2026-01-23T01:05:20.997980542Z" level=info msg="StopPodSandbox for \"2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66\" returns successfully" Jan 23 01:05:21.152427 kubelet[2904]: I0123 01:05:21.146155 2904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/baca7367-8f45-40cf-b782-1ff5b51a0c81-whisker-backend-key-pair\") pod \"baca7367-8f45-40cf-b782-1ff5b51a0c81\" (UID: \"baca7367-8f45-40cf-b782-1ff5b51a0c81\") " Jan 23 01:05:21.152427 kubelet[2904]: I0123 01:05:21.147361 2904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/baca7367-8f45-40cf-b782-1ff5b51a0c81-whisker-ca-bundle\") pod \"baca7367-8f45-40cf-b782-1ff5b51a0c81\" (UID: \"baca7367-8f45-40cf-b782-1ff5b51a0c81\") " Jan 23 01:05:21.152427 kubelet[2904]: I0123 01:05:21.147405 2904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw9hw\" (UniqueName: \"kubernetes.io/projected/baca7367-8f45-40cf-b782-1ff5b51a0c81-kube-api-access-rw9hw\") pod \"baca7367-8f45-40cf-b782-1ff5b51a0c81\" (UID: \"baca7367-8f45-40cf-b782-1ff5b51a0c81\") " Jan 23 01:05:21.152427 kubelet[2904]: I0123 01:05:21.150318 2904 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baca7367-8f45-40cf-b782-1ff5b51a0c81-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "baca7367-8f45-40cf-b782-1ff5b51a0c81" (UID: "baca7367-8f45-40cf-b782-1ff5b51a0c81"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 01:05:21.181343 systemd[1]: var-lib-kubelet-pods-baca7367\x2d8f45\x2d40cf\x2db782\x2d1ff5b51a0c81-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drw9hw.mount: Deactivated successfully. Jan 23 01:05:21.186149 kubelet[2904]: I0123 01:05:21.185723 2904 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baca7367-8f45-40cf-b782-1ff5b51a0c81-kube-api-access-rw9hw" (OuterVolumeSpecName: "kube-api-access-rw9hw") pod "baca7367-8f45-40cf-b782-1ff5b51a0c81" (UID: "baca7367-8f45-40cf-b782-1ff5b51a0c81"). InnerVolumeSpecName "kube-api-access-rw9hw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:05:21.188528 kubelet[2904]: I0123 01:05:21.187720 2904 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baca7367-8f45-40cf-b782-1ff5b51a0c81-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "baca7367-8f45-40cf-b782-1ff5b51a0c81" (UID: "baca7367-8f45-40cf-b782-1ff5b51a0c81"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 01:05:21.190202 systemd[1]: var-lib-kubelet-pods-baca7367\x2d8f45\x2d40cf\x2db782\x2d1ff5b51a0c81-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 23 01:05:21.251332 kubelet[2904]: I0123 01:05:21.251283 2904 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/baca7367-8f45-40cf-b782-1ff5b51a0c81-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 23 01:05:21.251515 kubelet[2904]: I0123 01:05:21.251479 2904 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rw9hw\" (UniqueName: \"kubernetes.io/projected/baca7367-8f45-40cf-b782-1ff5b51a0c81-kube-api-access-rw9hw\") on node \"localhost\" DevicePath \"\"" Jan 23 01:05:21.251515 kubelet[2904]: I0123 01:05:21.251495 2904 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/baca7367-8f45-40cf-b782-1ff5b51a0c81-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 23 01:05:21.541750 kubelet[2904]: E0123 01:05:21.537966 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:21.555349 kubelet[2904]: E0123 01:05:21.555263 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e" Jan 23 01:05:21.608533 systemd[1]: Removed slice kubepods-besteffort-podbaca7367_8f45_40cf_b782_1ff5b51a0c81.slice - libcontainer container kubepods-besteffort-podbaca7367_8f45_40cf_b782_1ff5b51a0c81.slice. 
Jan 23 01:05:22.123207 systemd[1]: Created slice kubepods-besteffort-podf3049eb1_9735_4370_a74a_2cab9800bc64.slice - libcontainer container kubepods-besteffort-podf3049eb1_9735_4370_a74a_2cab9800bc64.slice. Jan 23 01:05:22.171853 kubelet[2904]: I0123 01:05:22.169912 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3049eb1-9735-4370-a74a-2cab9800bc64-whisker-ca-bundle\") pod \"whisker-57c574bd64-f4j4m\" (UID: \"f3049eb1-9735-4370-a74a-2cab9800bc64\") " pod="calico-system/whisker-57c574bd64-f4j4m" Jan 23 01:05:22.171853 kubelet[2904]: I0123 01:05:22.169980 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfm6n\" (UniqueName: \"kubernetes.io/projected/f3049eb1-9735-4370-a74a-2cab9800bc64-kube-api-access-rfm6n\") pod \"whisker-57c574bd64-f4j4m\" (UID: \"f3049eb1-9735-4370-a74a-2cab9800bc64\") " pod="calico-system/whisker-57c574bd64-f4j4m" Jan 23 01:05:22.171853 kubelet[2904]: I0123 01:05:22.170025 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f3049eb1-9735-4370-a74a-2cab9800bc64-whisker-backend-key-pair\") pod \"whisker-57c574bd64-f4j4m\" (UID: \"f3049eb1-9735-4370-a74a-2cab9800bc64\") " pod="calico-system/whisker-57c574bd64-f4j4m" Jan 23 01:05:22.439030 containerd[1567]: time="2026-01-23T01:05:22.437188504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57c574bd64-f4j4m,Uid:f3049eb1-9735-4370-a74a-2cab9800bc64,Namespace:calico-system,Attempt:0,}" Jan 23 01:05:22.566093 kubelet[2904]: E0123 01:05:22.566031 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e" Jan 23 01:05:22.568714 kubelet[2904]: E0123 01:05:22.568198 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:22.570290 kubelet[2904]: E0123 01:05:22.570006 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:22.570317 systemd-networkd[1485]: vxlan.calico: Link UP Jan 23 01:05:22.570325 systemd-networkd[1485]: vxlan.calico: Gained carrier Jan 23 01:05:22.589892 containerd[1567]: time="2026-01-23T01:05:22.578388561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p82rz,Uid:26ba25fb-0e8b-48e9-998f-30a0f733f697,Namespace:kube-system,Attempt:0,}" Jan 23 01:05:23.519442 systemd-networkd[1485]: cali7c9604d79d0: Link UP Jan 23 01:05:23.522542 systemd-networkd[1485]: cali7c9604d79d0: Gained carrier Jan 23 01:05:23.609934 kubelet[2904]: I0123 01:05:23.608356 2904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baca7367-8f45-40cf-b782-1ff5b51a0c81" path="/var/lib/kubelet/pods/baca7367-8f45-40cf-b782-1ff5b51a0c81/volumes" Jan 23 01:05:23.619549 containerd[1567]: 2026-01-23 01:05:22.735 [INFO][5194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--57c574bd64--f4j4m-eth0 whisker-57c574bd64- calico-system f3049eb1-9735-4370-a74a-2cab9800bc64 1239 0 2026-01-23 01:05:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:57c574bd64 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-57c574bd64-f4j4m eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7c9604d79d0 [] [] }} ContainerID="96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" Namespace="calico-system" Pod="whisker-57c574bd64-f4j4m" WorkloadEndpoint="localhost-k8s-whisker--57c574bd64--f4j4m-" Jan 23 01:05:23.619549 containerd[1567]: 2026-01-23 01:05:22.742 [INFO][5194] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" Namespace="calico-system" Pod="whisker-57c574bd64-f4j4m" WorkloadEndpoint="localhost-k8s-whisker--57c574bd64--f4j4m-eth0" Jan 23 01:05:23.619549 containerd[1567]: 2026-01-23 01:05:23.010 [INFO][5230] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" HandleID="k8s-pod-network.96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" Workload="localhost-k8s-whisker--57c574bd64--f4j4m-eth0" Jan 23 01:05:23.619549 containerd[1567]: 2026-01-23 01:05:23.019 [INFO][5230] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" HandleID="k8s-pod-network.96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" Workload="localhost-k8s-whisker--57c574bd64--f4j4m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-57c574bd64-f4j4m", "timestamp":"2026-01-23 01:05:23.010577038 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 
01:05:23.619549 containerd[1567]: 2026-01-23 01:05:23.020 [INFO][5230] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:05:23.619549 containerd[1567]: 2026-01-23 01:05:23.020 [INFO][5230] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:05:23.619549 containerd[1567]: 2026-01-23 01:05:23.020 [INFO][5230] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:05:23.619549 containerd[1567]: 2026-01-23 01:05:23.109 [INFO][5230] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" host="localhost" Jan 23 01:05:23.619549 containerd[1567]: 2026-01-23 01:05:23.219 [INFO][5230] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:05:23.619549 containerd[1567]: 2026-01-23 01:05:23.255 [INFO][5230] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:05:23.619549 containerd[1567]: 2026-01-23 01:05:23.276 [INFO][5230] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:05:23.619549 containerd[1567]: 2026-01-23 01:05:23.306 [INFO][5230] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:05:23.619549 containerd[1567]: 2026-01-23 01:05:23.306 [INFO][5230] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" host="localhost" Jan 23 01:05:23.619549 containerd[1567]: 2026-01-23 01:05:23.328 [INFO][5230] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb Jan 23 01:05:23.619549 containerd[1567]: 2026-01-23 01:05:23.351 [INFO][5230] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" host="localhost" Jan 23 01:05:23.619549 containerd[1567]: 2026-01-23 01:05:23.417 [INFO][5230] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" host="localhost" Jan 23 01:05:23.619549 containerd[1567]: 2026-01-23 01:05:23.421 [INFO][5230] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" host="localhost" Jan 23 01:05:23.619549 containerd[1567]: 2026-01-23 01:05:23.422 [INFO][5230] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:05:23.619549 containerd[1567]: 2026-01-23 01:05:23.422 [INFO][5230] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" HandleID="k8s-pod-network.96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" Workload="localhost-k8s-whisker--57c574bd64--f4j4m-eth0" Jan 23 01:05:23.621398 containerd[1567]: 2026-01-23 01:05:23.449 [INFO][5194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" Namespace="calico-system" Pod="whisker-57c574bd64-f4j4m" WorkloadEndpoint="localhost-k8s-whisker--57c574bd64--f4j4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--57c574bd64--f4j4m-eth0", GenerateName:"whisker-57c574bd64-", Namespace:"calico-system", SelfLink:"", UID:"f3049eb1-9735-4370-a74a-2cab9800bc64", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 5, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57c574bd64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-57c574bd64-f4j4m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7c9604d79d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:05:23.621398 containerd[1567]: 2026-01-23 01:05:23.451 [INFO][5194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" Namespace="calico-system" Pod="whisker-57c574bd64-f4j4m" WorkloadEndpoint="localhost-k8s-whisker--57c574bd64--f4j4m-eth0" Jan 23 01:05:23.621398 containerd[1567]: 2026-01-23 01:05:23.451 [INFO][5194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c9604d79d0 ContainerID="96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" Namespace="calico-system" Pod="whisker-57c574bd64-f4j4m" WorkloadEndpoint="localhost-k8s-whisker--57c574bd64--f4j4m-eth0" Jan 23 01:05:23.621398 containerd[1567]: 2026-01-23 01:05:23.524 [INFO][5194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" Namespace="calico-system" Pod="whisker-57c574bd64-f4j4m" WorkloadEndpoint="localhost-k8s-whisker--57c574bd64--f4j4m-eth0" Jan 23 01:05:23.621398 containerd[1567]: 2026-01-23 
01:05:23.526 [INFO][5194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" Namespace="calico-system" Pod="whisker-57c574bd64-f4j4m" WorkloadEndpoint="localhost-k8s-whisker--57c574bd64--f4j4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--57c574bd64--f4j4m-eth0", GenerateName:"whisker-57c574bd64-", Namespace:"calico-system", SelfLink:"", UID:"f3049eb1-9735-4370-a74a-2cab9800bc64", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 5, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57c574bd64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb", Pod:"whisker-57c574bd64-f4j4m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7c9604d79d0", MAC:"aa:8b:73:53:0b:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:05:23.621398 containerd[1567]: 2026-01-23 01:05:23.572 [INFO][5194] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" 
Namespace="calico-system" Pod="whisker-57c574bd64-f4j4m" WorkloadEndpoint="localhost-k8s-whisker--57c574bd64--f4j4m-eth0" Jan 23 01:05:23.850084 systemd-networkd[1485]: cali6dfa7340b12: Link UP Jan 23 01:05:23.856281 containerd[1567]: time="2026-01-23T01:05:23.856144558Z" level=info msg="connecting to shim 96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb" address="unix:///run/containerd/s/5f45fe89b4b70e55c56206a185155f1f77408c39ebf343feb6ce3195f5ba3cf1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:05:23.857294 systemd-networkd[1485]: cali6dfa7340b12: Gained carrier Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.067 [INFO][5209] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--p82rz-eth0 coredns-674b8bbfcf- kube-system 26ba25fb-0e8b-48e9-998f-30a0f733f697 1040 0 2026-01-23 01:03:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-p82rz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6dfa7340b12 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-p82rz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p82rz-" Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.075 [INFO][5209] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-p82rz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p82rz-eth0" Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.338 [INFO][5253] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" HandleID="k8s-pod-network.aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" Workload="localhost-k8s-coredns--674b8bbfcf--p82rz-eth0" Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.340 [INFO][5253] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" HandleID="k8s-pod-network.aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" Workload="localhost-k8s-coredns--674b8bbfcf--p82rz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000283c90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-p82rz", "timestamp":"2026-01-23 01:05:23.338757715 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.340 [INFO][5253] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.422 [INFO][5253] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.423 [INFO][5253] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.471 [INFO][5253] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" host="localhost" Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.560 [INFO][5253] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.625 [INFO][5253] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.639 [INFO][5253] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.653 [INFO][5253] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.653 [INFO][5253] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" host="localhost" Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.684 [INFO][5253] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3 Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.713 [INFO][5253] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" host="localhost" Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.767 [INFO][5253] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" host="localhost" Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.768 [INFO][5253] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" host="localhost" Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.768 [INFO][5253] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:05:23.970565 containerd[1567]: 2026-01-23 01:05:23.769 [INFO][5253] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" HandleID="k8s-pod-network.aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" Workload="localhost-k8s-coredns--674b8bbfcf--p82rz-eth0" Jan 23 01:05:23.972150 containerd[1567]: 2026-01-23 01:05:23.816 [INFO][5209] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-p82rz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p82rz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--p82rz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"26ba25fb-0e8b-48e9-998f-30a0f733f697", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 3, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-p82rz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6dfa7340b12", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:05:23.972150 containerd[1567]: 2026-01-23 01:05:23.820 [INFO][5209] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-p82rz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p82rz-eth0" Jan 23 01:05:23.972150 containerd[1567]: 2026-01-23 01:05:23.820 [INFO][5209] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6dfa7340b12 ContainerID="aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-p82rz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p82rz-eth0" Jan 23 01:05:23.972150 containerd[1567]: 2026-01-23 01:05:23.862 [INFO][5209] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-p82rz" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p82rz-eth0" Jan 23 01:05:23.972150 containerd[1567]: 2026-01-23 01:05:23.868 [INFO][5209] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-p82rz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p82rz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--p82rz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"26ba25fb-0e8b-48e9-998f-30a0f733f697", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 3, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3", Pod:"coredns-674b8bbfcf-p82rz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6dfa7340b12", MAC:"7a:dd:9d:c7:94:0c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:05:23.974161 containerd[1567]: 2026-01-23 01:05:23.958 [INFO][5209] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-p82rz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p82rz-eth0" Jan 23 01:05:24.054717 systemd[1]: Started cri-containerd-96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb.scope - libcontainer container 96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb. Jan 23 01:05:24.201994 systemd-resolved[1397]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:05:24.214989 containerd[1567]: time="2026-01-23T01:05:24.214066870Z" level=info msg="connecting to shim aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3" address="unix:///run/containerd/s/82804bdf3cae6b2b589b77a03f2614839e329ce4d7acc83a0b865eaa734955fc" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:05:24.374715 systemd[1]: Started cri-containerd-aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3.scope - libcontainer container aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3. 
Jan 23 01:05:24.397498 systemd-networkd[1485]: vxlan.calico: Gained IPv6LL Jan 23 01:05:24.504734 systemd-resolved[1397]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:05:24.540525 containerd[1567]: time="2026-01-23T01:05:24.539169448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57c574bd64-f4j4m,Uid:f3049eb1-9735-4370-a74a-2cab9800bc64,Namespace:calico-system,Attempt:0,} returns sandbox id \"96bf6e2e4b4172e348ba0799523e407c2bbd6e046006b658646cd67d86adafbb\"" Jan 23 01:05:24.546639 containerd[1567]: time="2026-01-23T01:05:24.546185859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:05:24.584500 containerd[1567]: time="2026-01-23T01:05:24.582648088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd579f464-47gkr,Uid:aa11cfa4-c767-44e1-bc2c-24c685ae9875,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:05:24.586750 containerd[1567]: time="2026-01-23T01:05:24.584674009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd579f464-d54m6,Uid:3c57c36f-e9c4-4469-830b-86d51909b784,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:05:24.648673 systemd-networkd[1485]: cali7c9604d79d0: Gained IPv6LL Jan 23 01:05:24.680498 containerd[1567]: time="2026-01-23T01:05:24.676583500Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:24.686597 containerd[1567]: time="2026-01-23T01:05:24.682538742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:05:24.686597 containerd[1567]: time="2026-01-23T01:05:24.683393969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active 
requests=0, bytes read=73" Jan 23 01:05:24.688573 kubelet[2904]: E0123 01:05:24.687515 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:05:24.693001 kubelet[2904]: E0123 01:05:24.692962 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:05:24.696942 kubelet[2904]: E0123 01:05:24.696092 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a705283100714243a961fb2d223d106b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rfm6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,Re
adOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57c574bd64-f4j4m_calico-system(f3049eb1-9735-4370-a74a-2cab9800bc64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:24.703931 containerd[1567]: time="2026-01-23T01:05:24.702033980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:05:24.824662 containerd[1567]: time="2026-01-23T01:05:24.821169621Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:24.844437 containerd[1567]: time="2026-01-23T01:05:24.842941227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p82rz,Uid:26ba25fb-0e8b-48e9-998f-30a0f733f697,Namespace:kube-system,Attempt:0,} returns sandbox id \"aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3\"" Jan 23 01:05:24.848097 kubelet[2904]: E0123 01:05:24.845594 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:24.848418 containerd[1567]: time="2026-01-23T01:05:24.847496455Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:05:24.848418 containerd[1567]: time="2026-01-23T01:05:24.847575736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:05:24.849742 kubelet[2904]: E0123 01:05:24.849372 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:05:24.849742 kubelet[2904]: E0123 01:05:24.849423 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:05:24.849742 kubelet[2904]: E0123 01:05:24.849558 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfm6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57c574bd64-f4j4m_calico-system(f3049eb1-9735-4370-a74a-2cab9800bc64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:24.851475 kubelet[2904]: E0123 01:05:24.851366 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57c574bd64-f4j4m" podUID="f3049eb1-9735-4370-a74a-2cab9800bc64" Jan 23 01:05:24.896737 containerd[1567]: time="2026-01-23T01:05:24.895039870Z" level=info msg="CreateContainer within sandbox \"aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:05:24.965401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1713057811.mount: Deactivated successfully. Jan 23 01:05:24.971102 containerd[1567]: time="2026-01-23T01:05:24.970980638Z" level=info msg="Container 5058ff79c1137a3ceb3fb67c11bcf798007d0ab8c86ed29449c6ce6cb2e196b8: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:05:24.971439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597836284.mount: Deactivated successfully. 
Jan 23 01:05:25.012994 containerd[1567]: time="2026-01-23T01:05:25.012458859Z" level=info msg="CreateContainer within sandbox \"aeb2e349f98f0786de0c8d00228c8dd09148191dd4bacd62ce8592c6dfcf22a3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5058ff79c1137a3ceb3fb67c11bcf798007d0ab8c86ed29449c6ce6cb2e196b8\"" Jan 23 01:05:25.035563 containerd[1567]: time="2026-01-23T01:05:25.035006413Z" level=info msg="StartContainer for \"5058ff79c1137a3ceb3fb67c11bcf798007d0ab8c86ed29449c6ce6cb2e196b8\"" Jan 23 01:05:25.038483 containerd[1567]: time="2026-01-23T01:05:25.038449340Z" level=info msg="connecting to shim 5058ff79c1137a3ceb3fb67c11bcf798007d0ab8c86ed29449c6ce6cb2e196b8" address="unix:///run/containerd/s/82804bdf3cae6b2b589b77a03f2614839e329ce4d7acc83a0b865eaa734955fc" protocol=ttrpc version=3 Jan 23 01:05:25.158408 systemd-networkd[1485]: cali6dfa7340b12: Gained IPv6LL Jan 23 01:05:25.343677 systemd[1]: Started cri-containerd-5058ff79c1137a3ceb3fb67c11bcf798007d0ab8c86ed29449c6ce6cb2e196b8.scope - libcontainer container 5058ff79c1137a3ceb3fb67c11bcf798007d0ab8c86ed29449c6ce6cb2e196b8. 
Jan 23 01:05:25.580020 containerd[1567]: time="2026-01-23T01:05:25.579434014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pk4tl,Uid:77cee7a3-d314-42b2-8d1b-22ce21da8d56,Namespace:calico-system,Attempt:0,}" Jan 23 01:05:25.787103 kubelet[2904]: E0123 01:05:25.785126 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57c574bd64-f4j4m" podUID="f3049eb1-9735-4370-a74a-2cab9800bc64" Jan 23 01:05:26.101077 containerd[1567]: time="2026-01-23T01:05:26.100529244Z" level=info msg="StartContainer for \"5058ff79c1137a3ceb3fb67c11bcf798007d0ab8c86ed29449c6ce6cb2e196b8\" returns successfully" Jan 23 01:05:26.208194 systemd-networkd[1485]: calid712f2edf01: Link UP Jan 23 01:05:26.208753 systemd-networkd[1485]: calid712f2edf01: Gained carrier Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:25.060 [INFO][5362] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6cd579f464--47gkr-eth0 calico-apiserver-6cd579f464- calico-apiserver aa11cfa4-c767-44e1-bc2c-24c685ae9875 1045 0 2026-01-23 01:04:18 +0000 
UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cd579f464 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6cd579f464-47gkr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid712f2edf01 [] [] }} ContainerID="60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" Namespace="calico-apiserver" Pod="calico-apiserver-6cd579f464-47gkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd579f464--47gkr-" Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:25.060 [INFO][5362] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" Namespace="calico-apiserver" Pod="calico-apiserver-6cd579f464-47gkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd579f464--47gkr-eth0" Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:25.579 [INFO][5422] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" HandleID="k8s-pod-network.60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" Workload="localhost-k8s-calico--apiserver--6cd579f464--47gkr-eth0" Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:25.607 [INFO][5422] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" HandleID="k8s-pod-network.60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" Workload="localhost-k8s-calico--apiserver--6cd579f464--47gkr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011a6d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6cd579f464-47gkr", "timestamp":"2026-01-23 01:05:25.579607484 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:25.619 [INFO][5422] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:25.622 [INFO][5422] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:25.622 [INFO][5422] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:25.834 [INFO][5422] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" host="localhost" Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:25.896 [INFO][5422] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:25.937 [INFO][5422] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:25.978 [INFO][5422] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:26.000 [INFO][5422] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:26.001 [INFO][5422] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" host="localhost" Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:26.017 [INFO][5422] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828 Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:26.071 [INFO][5422] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" host="localhost" Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:26.131 [INFO][5422] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" host="localhost" Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:26.135 [INFO][5422] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" host="localhost" Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:26.136 [INFO][5422] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:05:26.286077 containerd[1567]: 2026-01-23 01:05:26.136 [INFO][5422] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" HandleID="k8s-pod-network.60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" Workload="localhost-k8s-calico--apiserver--6cd579f464--47gkr-eth0" Jan 23 01:05:26.288269 containerd[1567]: 2026-01-23 01:05:26.162 [INFO][5362] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" Namespace="calico-apiserver" Pod="calico-apiserver-6cd579f464-47gkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd579f464--47gkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cd579f464--47gkr-eth0", GenerateName:"calico-apiserver-6cd579f464-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa11cfa4-c767-44e1-bc2c-24c685ae9875", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd579f464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6cd579f464-47gkr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid712f2edf01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:05:26.288269 containerd[1567]: 2026-01-23 01:05:26.162 [INFO][5362] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" Namespace="calico-apiserver" Pod="calico-apiserver-6cd579f464-47gkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd579f464--47gkr-eth0" Jan 23 01:05:26.288269 containerd[1567]: 2026-01-23 01:05:26.163 [INFO][5362] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid712f2edf01 ContainerID="60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" Namespace="calico-apiserver" Pod="calico-apiserver-6cd579f464-47gkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd579f464--47gkr-eth0" Jan 23 01:05:26.288269 containerd[1567]: 2026-01-23 01:05:26.208 [INFO][5362] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" Namespace="calico-apiserver" Pod="calico-apiserver-6cd579f464-47gkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd579f464--47gkr-eth0" Jan 23 01:05:26.288269 containerd[1567]: 2026-01-23 01:05:26.214 [INFO][5362] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" Namespace="calico-apiserver" Pod="calico-apiserver-6cd579f464-47gkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd579f464--47gkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cd579f464--47gkr-eth0", 
GenerateName:"calico-apiserver-6cd579f464-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa11cfa4-c767-44e1-bc2c-24c685ae9875", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd579f464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828", Pod:"calico-apiserver-6cd579f464-47gkr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid712f2edf01", MAC:"7a:a2:2a:8c:b5:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:05:26.288269 containerd[1567]: 2026-01-23 01:05:26.269 [INFO][5362] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" Namespace="calico-apiserver" Pod="calico-apiserver-6cd579f464-47gkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd579f464--47gkr-eth0" Jan 23 01:05:26.441109 containerd[1567]: time="2026-01-23T01:05:26.438180052Z" level=info msg="connecting to shim 60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828" 
address="unix:///run/containerd/s/80d0a0f581ae1d523a65947b31cace5131e941202f5d2a24e406b03b5b183921" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:05:26.572275 kubelet[2904]: E0123 01:05:26.568449 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:26.572527 containerd[1567]: time="2026-01-23T01:05:26.571415108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dcc89fd94-gvlr2,Uid:5ea72ad9-04e5-48e1-a1f3-bd44567b901e,Namespace:calico-system,Attempt:0,}" Jan 23 01:05:26.571185 systemd-networkd[1485]: calia456f8d1bdc: Link UP Jan 23 01:05:26.597980 systemd-networkd[1485]: calia456f8d1bdc: Gained carrier Jan 23 01:05:26.800152 systemd[1]: Started cri-containerd-60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828.scope - libcontainer container 60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828. 
Jan 23 01:05:26.808082 containerd[1567]: 2026-01-23 01:05:25.272 [INFO][5366] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6cd579f464--d54m6-eth0 calico-apiserver-6cd579f464- calico-apiserver 3c57c36f-e9c4-4469-830b-86d51909b784 1044 0 2026-01-23 01:04:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cd579f464 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6cd579f464-d54m6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia456f8d1bdc [] [] }} ContainerID="2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" Namespace="calico-apiserver" Pod="calico-apiserver-6cd579f464-d54m6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd579f464--d54m6-" Jan 23 01:05:26.808082 containerd[1567]: 2026-01-23 01:05:25.294 [INFO][5366] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" Namespace="calico-apiserver" Pod="calico-apiserver-6cd579f464-d54m6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd579f464--d54m6-eth0" Jan 23 01:05:26.808082 containerd[1567]: 2026-01-23 01:05:25.986 [INFO][5437] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" HandleID="k8s-pod-network.2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" Workload="localhost-k8s-calico--apiserver--6cd579f464--d54m6-eth0" Jan 23 01:05:26.808082 containerd[1567]: 2026-01-23 01:05:25.987 [INFO][5437] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" 
HandleID="k8s-pod-network.2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" Workload="localhost-k8s-calico--apiserver--6cd579f464--d54m6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003067f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6cd579f464-d54m6", "timestamp":"2026-01-23 01:05:25.98651053 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:05:26.808082 containerd[1567]: 2026-01-23 01:05:25.987 [INFO][5437] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:05:26.808082 containerd[1567]: 2026-01-23 01:05:26.137 [INFO][5437] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:05:26.808082 containerd[1567]: 2026-01-23 01:05:26.138 [INFO][5437] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:05:26.808082 containerd[1567]: 2026-01-23 01:05:26.195 [INFO][5437] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" host="localhost" Jan 23 01:05:26.808082 containerd[1567]: 2026-01-23 01:05:26.257 [INFO][5437] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:05:26.808082 containerd[1567]: 2026-01-23 01:05:26.297 [INFO][5437] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:05:26.808082 containerd[1567]: 2026-01-23 01:05:26.322 [INFO][5437] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:05:26.808082 containerd[1567]: 2026-01-23 01:05:26.352 [INFO][5437] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:05:26.808082 containerd[1567]: 
2026-01-23 01:05:26.353 [INFO][5437] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" host="localhost" Jan 23 01:05:26.808082 containerd[1567]: 2026-01-23 01:05:26.372 [INFO][5437] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96 Jan 23 01:05:26.808082 containerd[1567]: 2026-01-23 01:05:26.426 [INFO][5437] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" host="localhost" Jan 23 01:05:26.808082 containerd[1567]: 2026-01-23 01:05:26.467 [INFO][5437] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" host="localhost" Jan 23 01:05:26.808082 containerd[1567]: 2026-01-23 01:05:26.467 [INFO][5437] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" host="localhost" Jan 23 01:05:26.808082 containerd[1567]: 2026-01-23 01:05:26.467 [INFO][5437] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:05:26.808082 containerd[1567]: 2026-01-23 01:05:26.467 [INFO][5437] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" HandleID="k8s-pod-network.2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" Workload="localhost-k8s-calico--apiserver--6cd579f464--d54m6-eth0" Jan 23 01:05:26.811004 containerd[1567]: 2026-01-23 01:05:26.519 [INFO][5366] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" Namespace="calico-apiserver" Pod="calico-apiserver-6cd579f464-d54m6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd579f464--d54m6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cd579f464--d54m6-eth0", GenerateName:"calico-apiserver-6cd579f464-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c57c36f-e9c4-4469-830b-86d51909b784", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd579f464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6cd579f464-d54m6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia456f8d1bdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:05:26.811004 containerd[1567]: 2026-01-23 01:05:26.523 [INFO][5366] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" Namespace="calico-apiserver" Pod="calico-apiserver-6cd579f464-d54m6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd579f464--d54m6-eth0" Jan 23 01:05:26.811004 containerd[1567]: 2026-01-23 01:05:26.524 [INFO][5366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia456f8d1bdc ContainerID="2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" Namespace="calico-apiserver" Pod="calico-apiserver-6cd579f464-d54m6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd579f464--d54m6-eth0" Jan 23 01:05:26.811004 containerd[1567]: 2026-01-23 01:05:26.604 [INFO][5366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" Namespace="calico-apiserver" Pod="calico-apiserver-6cd579f464-d54m6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd579f464--d54m6-eth0" Jan 23 01:05:26.811004 containerd[1567]: 2026-01-23 01:05:26.639 [INFO][5366] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" Namespace="calico-apiserver" Pod="calico-apiserver-6cd579f464-d54m6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd579f464--d54m6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cd579f464--d54m6-eth0", 
GenerateName:"calico-apiserver-6cd579f464-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c57c36f-e9c4-4469-830b-86d51909b784", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd579f464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96", Pod:"calico-apiserver-6cd579f464-d54m6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia456f8d1bdc", MAC:"46:5f:7a:90:4a:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:05:26.811004 containerd[1567]: 2026-01-23 01:05:26.684 [INFO][5366] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" Namespace="calico-apiserver" Pod="calico-apiserver-6cd579f464-d54m6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd579f464--d54m6-eth0" Jan 23 01:05:26.811759 kubelet[2904]: E0123 01:05:26.809560 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:26.911086 
kubelet[2904]: I0123 01:05:26.910529 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-p82rz" podStartSLOduration=144.90653648 podStartE2EDuration="2m24.90653648s" podCreationTimestamp="2026-01-23 01:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:05:26.904580247 +0000 UTC m=+150.250076917" watchObservedRunningTime="2026-01-23 01:05:26.90653648 +0000 UTC m=+150.252033131" Jan 23 01:05:27.067164 containerd[1567]: time="2026-01-23T01:05:27.065684486Z" level=info msg="connecting to shim 2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96" address="unix:///run/containerd/s/8a59f14e9e481cf6fb19a05f54b52339238efa5d46dc7ba0fb84b119a0b2f754" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:05:27.160244 systemd-networkd[1485]: calif0d38a573c9: Link UP Jan 23 01:05:27.166612 systemd-networkd[1485]: calif0d38a573c9: Gained carrier Jan 23 01:05:27.269055 systemd-resolved[1397]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:05:27.325454 systemd[1]: Started cri-containerd-2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96.scope - libcontainer container 2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96. 
Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:26.091 [INFO][5462] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pk4tl-eth0 csi-node-driver- calico-system 77cee7a3-d314-42b2-8d1b-22ce21da8d56 897 0 2026-01-23 01:04:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-pk4tl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif0d38a573c9 [] [] }} ContainerID="678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" Namespace="calico-system" Pod="csi-node-driver-pk4tl" WorkloadEndpoint="localhost-k8s-csi--node--driver--pk4tl-" Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:26.094 [INFO][5462] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" Namespace="calico-system" Pod="csi-node-driver-pk4tl" WorkloadEndpoint="localhost-k8s-csi--node--driver--pk4tl-eth0" Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:26.282 [INFO][5507] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" HandleID="k8s-pod-network.678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" Workload="localhost-k8s-csi--node--driver--pk4tl-eth0" Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:26.289 [INFO][5507] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" HandleID="k8s-pod-network.678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" 
Workload="localhost-k8s-csi--node--driver--pk4tl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000415b30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pk4tl", "timestamp":"2026-01-23 01:05:26.282145247 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:26.289 [INFO][5507] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:26.467 [INFO][5507] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:26.468 [INFO][5507] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:26.541 [INFO][5507] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" host="localhost" Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:26.791 [INFO][5507] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:26.831 [INFO][5507] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:26.848 [INFO][5507] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:26.905 [INFO][5507] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:26.909 [INFO][5507] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" host="localhost" Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:26.930 [INFO][5507] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52 Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:26.998 [INFO][5507] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" host="localhost" Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:27.068 [INFO][5507] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" host="localhost" Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:27.068 [INFO][5507] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" host="localhost" Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:27.069 [INFO][5507] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:05:27.342548 containerd[1567]: 2026-01-23 01:05:27.069 [INFO][5507] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" HandleID="k8s-pod-network.678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" Workload="localhost-k8s-csi--node--driver--pk4tl-eth0" Jan 23 01:05:27.351505 containerd[1567]: 2026-01-23 01:05:27.126 [INFO][5462] cni-plugin/k8s.go 418: Populated endpoint ContainerID="678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" Namespace="calico-system" Pod="csi-node-driver-pk4tl" WorkloadEndpoint="localhost-k8s-csi--node--driver--pk4tl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pk4tl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77cee7a3-d314-42b2-8d1b-22ce21da8d56", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pk4tl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calif0d38a573c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:05:27.351505 containerd[1567]: 2026-01-23 01:05:27.126 [INFO][5462] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" Namespace="calico-system" Pod="csi-node-driver-pk4tl" WorkloadEndpoint="localhost-k8s-csi--node--driver--pk4tl-eth0" Jan 23 01:05:27.351505 containerd[1567]: 2026-01-23 01:05:27.126 [INFO][5462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0d38a573c9 ContainerID="678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" Namespace="calico-system" Pod="csi-node-driver-pk4tl" WorkloadEndpoint="localhost-k8s-csi--node--driver--pk4tl-eth0" Jan 23 01:05:27.351505 containerd[1567]: 2026-01-23 01:05:27.180 [INFO][5462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" Namespace="calico-system" Pod="csi-node-driver-pk4tl" WorkloadEndpoint="localhost-k8s-csi--node--driver--pk4tl-eth0" Jan 23 01:05:27.351505 containerd[1567]: 2026-01-23 01:05:27.207 [INFO][5462] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" Namespace="calico-system" Pod="csi-node-driver-pk4tl" WorkloadEndpoint="localhost-k8s-csi--node--driver--pk4tl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pk4tl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77cee7a3-d314-42b2-8d1b-22ce21da8d56", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 22, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52", Pod:"csi-node-driver-pk4tl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif0d38a573c9", MAC:"e6:ee:d8:16:4f:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:05:27.351505 containerd[1567]: 2026-01-23 01:05:27.331 [INFO][5462] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" Namespace="calico-system" Pod="csi-node-driver-pk4tl" WorkloadEndpoint="localhost-k8s-csi--node--driver--pk4tl-eth0" Jan 23 01:05:27.584177 systemd-resolved[1397]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:05:27.653596 systemd-networkd[1485]: calia456f8d1bdc: Gained IPv6LL Jan 23 01:05:27.711659 containerd[1567]: time="2026-01-23T01:05:27.711522270Z" level=info msg="connecting to shim 678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52" address="unix:///run/containerd/s/3e1aee8e646569b94e2e7fb1f010d469a9be255dec8af7318e212972e284c0d8" 
namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:05:27.855975 kubelet[2904]: E0123 01:05:27.855586 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:27.859296 containerd[1567]: time="2026-01-23T01:05:27.859108235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd579f464-47gkr,Uid:aa11cfa4-c767-44e1-bc2c-24c685ae9875,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"60fa9cbdadf32d60b02e4ffa369c8cf0764165db7376bc4e003aa658e9907828\"" Jan 23 01:05:27.869898 containerd[1567]: time="2026-01-23T01:05:27.868721630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:05:28.041211 systemd-networkd[1485]: calid712f2edf01: Gained IPv6LL Jan 23 01:05:28.178028 systemd-networkd[1485]: cali54c787d2ed7: Link UP Jan 23 01:05:28.184582 systemd-networkd[1485]: cali54c787d2ed7: Gained carrier Jan 23 01:05:28.232742 systemd-networkd[1485]: calif0d38a573c9: Gained IPv6LL Jan 23 01:05:28.292050 systemd[1]: Started cri-containerd-678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52.scope - libcontainer container 678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52. 
Jan 23 01:05:28.308163 containerd[1567]: time="2026-01-23T01:05:28.307622052Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:28.325102 containerd[1567]: time="2026-01-23T01:05:28.324932394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:05:28.325102 containerd[1567]: time="2026-01-23T01:05:28.325063085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:05:28.330051 kubelet[2904]: E0123 01:05:28.329594 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:05:28.330175 kubelet[2904]: E0123 01:05:28.330154 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:05:28.330662 kubelet[2904]: E0123 01:05:28.330530 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7xws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6cd579f464-47gkr_calico-apiserver(aa11cfa4-c767-44e1-bc2c-24c685ae9875): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:28.333191 kubelet[2904]: E0123 01:05:28.332678 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875" Jan 23 01:05:28.429578 containerd[1567]: time="2026-01-23T01:05:28.429288174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd579f464-d54m6,Uid:3c57c36f-e9c4-4469-830b-86d51909b784,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2b9e3d4b3c449bf21af6ab93f9face3be3b1e2da9dda7d68758ad28ca845de96\"" Jan 23 01:05:28.456056 containerd[1567]: time="2026-01-23T01:05:28.449306196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.073 [INFO][5561] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5dcc89fd94--gvlr2-eth0 calico-kube-controllers-5dcc89fd94- calico-system 5ea72ad9-04e5-48e1-a1f3-bd44567b901e 1048 0 2026-01-23 01:04:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5dcc89fd94 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5dcc89fd94-gvlr2 eth0 calico-kube-controllers [] [] 
[kns.calico-system ksa.calico-system.calico-kube-controllers] cali54c787d2ed7 [] [] }} ContainerID="89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" Namespace="calico-system" Pod="calico-kube-controllers-5dcc89fd94-gvlr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcc89fd94--gvlr2-" Jan 23 01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.076 [INFO][5561] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" Namespace="calico-system" Pod="calico-kube-controllers-5dcc89fd94-gvlr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcc89fd94--gvlr2-eth0" Jan 23 01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.435 [INFO][5616] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" HandleID="k8s-pod-network.89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" Workload="localhost-k8s-calico--kube--controllers--5dcc89fd94--gvlr2-eth0" Jan 23 01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.436 [INFO][5616] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" HandleID="k8s-pod-network.89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" Workload="localhost-k8s-calico--kube--controllers--5dcc89fd94--gvlr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139a70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5dcc89fd94-gvlr2", "timestamp":"2026-01-23 01:05:27.435553709 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.436 
[INFO][5616] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.437 [INFO][5616] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.437 [INFO][5616] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.497 [INFO][5616] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" host="localhost" Jan 23 01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.528 [INFO][5616] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.565 [INFO][5616] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.622 [INFO][5616] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.667 [INFO][5616] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.687 [INFO][5616] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" host="localhost" Jan 23 01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.717 [INFO][5616] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287 Jan 23 01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.762 [INFO][5616] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" host="localhost" Jan 23 
01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.916 [INFO][5616] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" host="localhost" Jan 23 01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.922 [INFO][5616] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" host="localhost" Jan 23 01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.922 [INFO][5616] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:05:28.546332 containerd[1567]: 2026-01-23 01:05:27.922 [INFO][5616] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" HandleID="k8s-pod-network.89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" Workload="localhost-k8s-calico--kube--controllers--5dcc89fd94--gvlr2-eth0" Jan 23 01:05:28.558273 containerd[1567]: 2026-01-23 01:05:27.946 [INFO][5561] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" Namespace="calico-system" Pod="calico-kube-controllers-5dcc89fd94-gvlr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcc89fd94--gvlr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5dcc89fd94--gvlr2-eth0", GenerateName:"calico-kube-controllers-5dcc89fd94-", Namespace:"calico-system", SelfLink:"", UID:"5ea72ad9-04e5-48e1-a1f3-bd44567b901e", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dcc89fd94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5dcc89fd94-gvlr2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali54c787d2ed7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:05:28.558273 containerd[1567]: 2026-01-23 01:05:27.947 [INFO][5561] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" Namespace="calico-system" Pod="calico-kube-controllers-5dcc89fd94-gvlr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcc89fd94--gvlr2-eth0" Jan 23 01:05:28.558273 containerd[1567]: 2026-01-23 01:05:27.947 [INFO][5561] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54c787d2ed7 ContainerID="89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" Namespace="calico-system" Pod="calico-kube-controllers-5dcc89fd94-gvlr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcc89fd94--gvlr2-eth0" Jan 23 01:05:28.558273 containerd[1567]: 2026-01-23 01:05:28.348 [INFO][5561] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" 
Namespace="calico-system" Pod="calico-kube-controllers-5dcc89fd94-gvlr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcc89fd94--gvlr2-eth0" Jan 23 01:05:28.558273 containerd[1567]: 2026-01-23 01:05:28.415 [INFO][5561] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" Namespace="calico-system" Pod="calico-kube-controllers-5dcc89fd94-gvlr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcc89fd94--gvlr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5dcc89fd94--gvlr2-eth0", GenerateName:"calico-kube-controllers-5dcc89fd94-", Namespace:"calico-system", SelfLink:"", UID:"5ea72ad9-04e5-48e1-a1f3-bd44567b901e", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dcc89fd94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287", Pod:"calico-kube-controllers-5dcc89fd94-gvlr2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"cali54c787d2ed7", MAC:"1e:07:d9:94:14:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:05:28.558273 containerd[1567]: 2026-01-23 01:05:28.501 [INFO][5561] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" Namespace="calico-system" Pod="calico-kube-controllers-5dcc89fd94-gvlr2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcc89fd94--gvlr2-eth0" Jan 23 01:05:28.601327 containerd[1567]: time="2026-01-23T01:05:28.600225385Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:28.613621 containerd[1567]: time="2026-01-23T01:05:28.612939914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:05:28.613621 containerd[1567]: time="2026-01-23T01:05:28.612996734Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:05:28.618014 kubelet[2904]: E0123 01:05:28.615738 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:05:28.618014 kubelet[2904]: E0123 01:05:28.616012 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:05:28.618955 kubelet[2904]: E0123 01:05:28.618346 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2cws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6cd579f464-d54m6_calico-apiserver(3c57c36f-e9c4-4469-830b-86d51909b784): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:28.621954 kubelet[2904]: E0123 01:05:28.621562 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784" Jan 23 01:05:28.802335 systemd-resolved[1397]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:05:28.821266 containerd[1567]: 
time="2026-01-23T01:05:28.806965167Z" level=info msg="connecting to shim 89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287" address="unix:///run/containerd/s/fb7b9d967075e0af94c1e894915062e3ef9b41264f5731216b2e0e3a76425dfe" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:05:28.886585 kubelet[2904]: E0123 01:05:28.886091 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:28.911246 kubelet[2904]: E0123 01:05:28.909313 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784" Jan 23 01:05:28.911246 kubelet[2904]: E0123 01:05:28.909592 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875" Jan 23 01:05:29.072357 containerd[1567]: time="2026-01-23T01:05:29.070183724Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-pk4tl,Uid:77cee7a3-d314-42b2-8d1b-22ce21da8d56,Namespace:calico-system,Attempt:0,} returns sandbox id \"678f79f29ab7f614c0738e760db4ebe1f9b6eb6256a17f275563f0eefd595f52\"" Jan 23 01:05:29.083537 containerd[1567]: time="2026-01-23T01:05:29.083124707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:05:29.143143 systemd[1]: Started cri-containerd-89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287.scope - libcontainer container 89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287. Jan 23 01:05:29.164125 containerd[1567]: time="2026-01-23T01:05:29.163714579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:29.170291 containerd[1567]: time="2026-01-23T01:05:29.168526223Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:05:29.170291 containerd[1567]: time="2026-01-23T01:05:29.168642153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:05:29.170611 kubelet[2904]: E0123 01:05:29.169948 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:05:29.170611 kubelet[2904]: E0123 01:05:29.170002 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:05:29.170611 kubelet[2904]: E0123 01:05:29.170151 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqkwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePo
licy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pk4tl_calico-system(77cee7a3-d314-42b2-8d1b-22ce21da8d56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:29.176924 containerd[1567]: time="2026-01-23T01:05:29.174115979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:05:29.235945 systemd-resolved[1397]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:05:29.260034 containerd[1567]: time="2026-01-23T01:05:29.259988829Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:29.293929 containerd[1567]: time="2026-01-23T01:05:29.293723409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:05:29.294206 containerd[1567]: time="2026-01-23T01:05:29.294160595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:05:29.295652 kubelet[2904]: E0123 01:05:29.294539 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:05:29.295652 kubelet[2904]: E0123 
01:05:29.294674 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:05:29.296830 kubelet[2904]: E0123 01:05:29.296058 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqkwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pk4tl_calico-system(77cee7a3-d314-42b2-8d1b-22ce21da8d56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:29.298958 kubelet[2904]: E0123 01:05:29.298556 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:05:29.410278 containerd[1567]: time="2026-01-23T01:05:29.409931512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dcc89fd94-gvlr2,Uid:5ea72ad9-04e5-48e1-a1f3-bd44567b901e,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"89b39e6323fcc96fa072835b7a0228b0fe53334e82b3fd6d12522b39a8b44287\"" Jan 23 01:05:29.427666 containerd[1567]: time="2026-01-23T01:05:29.427602003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:05:29.541995 containerd[1567]: time="2026-01-23T01:05:29.537698351Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:29.554593 containerd[1567]: time="2026-01-23T01:05:29.554281344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:05:29.554593 containerd[1567]: time="2026-01-23T01:05:29.554418275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:05:29.555050 kubelet[2904]: E0123 01:05:29.554671 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:05:29.555050 kubelet[2904]: E0123 01:05:29.554725 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:05:29.556229 kubelet[2904]: E0123 01:05:29.555193 2904 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rbdvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5dcc89fd94-gvlr2_calico-system(5ea72ad9-04e5-48e1-a1f3-bd44567b901e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:29.561284 kubelet[2904]: E0123 01:05:29.556536 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e" Jan 23 01:05:29.893348 systemd-networkd[1485]: cali54c787d2ed7: Gained IPv6LL Jan 23 01:05:29.909748 kubelet[2904]: E0123 01:05:29.908608 2904 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e" Jan 23 01:05:29.921090 kubelet[2904]: E0123 01:05:29.920954 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784" Jan 23 01:05:29.921978 kubelet[2904]: E0123 01:05:29.921573 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:29.946034 kubelet[2904]: E0123 01:05:29.945630 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:05:30.943489 kubelet[2904]: E0123 01:05:30.943109 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e" Jan 23 01:05:30.982191 kubelet[2904]: E0123 01:05:30.979285 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:05:33.566197 containerd[1567]: time="2026-01-23T01:05:33.565037825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:05:33.716749 containerd[1567]: time="2026-01-23T01:05:33.715539545Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:33.722393 containerd[1567]: time="2026-01-23T01:05:33.722238546Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:05:33.722393 containerd[1567]: time="2026-01-23T01:05:33.722349630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:05:33.724424 kubelet[2904]: E0123 01:05:33.724258 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:05:33.725164 kubelet[2904]: E0123 01:05:33.724418 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:05:33.725164 kubelet[2904]: E0123 01:05:33.724644 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkbmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5hvbp_calico-system(4d54e261-de28-4a61-bcdc-0ebb829e113e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:33.726177 kubelet[2904]: E0123 01:05:33.726059 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e" Jan 23 01:05:34.567115 kubelet[2904]: E0123 01:05:34.564136 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:36.565467 kubelet[2904]: 
E0123 01:05:36.563339 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:37.593419 containerd[1567]: time="2026-01-23T01:05:37.593206182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:05:37.706944 containerd[1567]: time="2026-01-23T01:05:37.706705696Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:37.731494 containerd[1567]: time="2026-01-23T01:05:37.725965389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:05:37.731494 containerd[1567]: time="2026-01-23T01:05:37.726086120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:05:37.731716 kubelet[2904]: E0123 01:05:37.727155 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:05:37.731716 kubelet[2904]: E0123 01:05:37.727231 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:05:37.731716 kubelet[2904]: E0123 01:05:37.727382 2904 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a705283100714243a961fb2d223d106b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rfm6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57c574bd64-f4j4m_calico-system(f3049eb1-9735-4370-a74a-2cab9800bc64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:37.737196 containerd[1567]: time="2026-01-23T01:05:37.737166585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 
23 01:05:37.873010 containerd[1567]: time="2026-01-23T01:05:37.870223127Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:37.907488 containerd[1567]: time="2026-01-23T01:05:37.905599422Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:05:37.907488 containerd[1567]: time="2026-01-23T01:05:37.905730803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:05:37.908633 kubelet[2904]: E0123 01:05:37.908521 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:05:37.908633 kubelet[2904]: E0123 01:05:37.908595 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:05:37.914539 kubelet[2904]: E0123 01:05:37.908756 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfm6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57c574bd64-f4j4m_calico-system(f3049eb1-9735-4370-a74a-2cab9800bc64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:37.914539 kubelet[2904]: E0123 01:05:37.910245 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57c574bd64-f4j4m" podUID="f3049eb1-9735-4370-a74a-2cab9800bc64" Jan 23 01:05:42.604640 containerd[1567]: time="2026-01-23T01:05:42.601929330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:05:42.779231 containerd[1567]: time="2026-01-23T01:05:42.779081715Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:42.799703 containerd[1567]: time="2026-01-23T01:05:42.799508137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:05:42.799703 containerd[1567]: time="2026-01-23T01:05:42.799628750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active 
requests=0, bytes read=85" Jan 23 01:05:42.802925 kubelet[2904]: E0123 01:05:42.801230 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:05:42.802925 kubelet[2904]: E0123 01:05:42.801317 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:05:42.802925 kubelet[2904]: E0123 01:05:42.801501 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rbdvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5dcc89fd94-gvlr2_calico-system(5ea72ad9-04e5-48e1-a1f3-bd44567b901e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:42.812477 kubelet[2904]: E0123 01:05:42.811454 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e" Jan 23 01:05:43.571721 containerd[1567]: time="2026-01-23T01:05:43.570614218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:05:43.709471 containerd[1567]: 
time="2026-01-23T01:05:43.704989200Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:43.720745 containerd[1567]: time="2026-01-23T01:05:43.720675825Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:05:43.721419 containerd[1567]: time="2026-01-23T01:05:43.721143633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:05:43.721663 kubelet[2904]: E0123 01:05:43.721543 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:05:43.721663 kubelet[2904]: E0123 01:05:43.721604 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:05:43.725095 containerd[1567]: time="2026-01-23T01:05:43.724757857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:05:43.729067 kubelet[2904]: E0123 01:05:43.727981 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2cws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6cd579f464-d54m6_calico-apiserver(3c57c36f-e9c4-4469-830b-86d51909b784): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:43.740419 kubelet[2904]: E0123 01:05:43.733692 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784" Jan 23 01:05:43.837113 containerd[1567]: time="2026-01-23T01:05:43.836355531Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:43.852397 containerd[1567]: time="2026-01-23T01:05:43.851757582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:05:43.852397 containerd[1567]: time="2026-01-23T01:05:43.852029463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:05:43.854931 kubelet[2904]: E0123 01:05:43.853724 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:05:43.854931 kubelet[2904]: E0123 01:05:43.854113 2904 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:05:43.854931 kubelet[2904]: E0123 01:05:43.854412 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7xws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6cd579f464-47gkr_calico-apiserver(aa11cfa4-c767-44e1-bc2c-24c685ae9875): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:43.860691 kubelet[2904]: E0123 01:05:43.856930 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875" Jan 23 01:05:44.612205 containerd[1567]: time="2026-01-23T01:05:44.607331897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:05:44.707341 containerd[1567]: 
time="2026-01-23T01:05:44.706614100Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:44.724987 containerd[1567]: time="2026-01-23T01:05:44.722122425Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:05:44.724987 containerd[1567]: time="2026-01-23T01:05:44.722135459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:05:44.727201 kubelet[2904]: E0123 01:05:44.723139 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:05:44.727201 kubelet[2904]: E0123 01:05:44.723206 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:05:44.727201 kubelet[2904]: E0123 01:05:44.723458 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqkwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pk4tl_calico-system(77cee7a3-d314-42b2-8d1b-22ce21da8d56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:44.740426 containerd[1567]: time="2026-01-23T01:05:44.733607613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:05:44.842948 containerd[1567]: time="2026-01-23T01:05:44.842567671Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:44.855442 containerd[1567]: time="2026-01-23T01:05:44.855167640Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:05:44.855442 containerd[1567]: time="2026-01-23T01:05:44.855404435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:05:44.861039 kubelet[2904]: E0123 01:05:44.858163 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:05:44.861039 kubelet[2904]: E0123 01:05:44.859707 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:05:44.863068 kubelet[2904]: E0123 
01:05:44.862213 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqkwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-pk4tl_calico-system(77cee7a3-d314-42b2-8d1b-22ce21da8d56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:44.866343 kubelet[2904]: E0123 01:05:44.864609 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:05:46.371755 kubelet[2904]: E0123 01:05:46.367047 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:05:46.571529 kubelet[2904]: E0123 01:05:46.571041 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e" Jan 23 01:05:53.587156 kubelet[2904]: E0123 01:05:53.582218 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e" Jan 23 01:05:53.612253 kubelet[2904]: E0123 01:05:53.611137 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57c574bd64-f4j4m" podUID="f3049eb1-9735-4370-a74a-2cab9800bc64" Jan 23 01:05:55.572128 kubelet[2904]: E0123 01:05:55.570699 2904 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875" Jan 23 01:05:56.567491 kubelet[2904]: E0123 01:05:56.566736 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784" Jan 23 01:05:57.346103 containerd[1567]: time="2026-01-23T01:05:57.345513095Z" level=info msg="StopPodSandbox for \"2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66\"" Jan 23 01:05:58.053564 containerd[1567]: 2026-01-23 01:05:57.643 [WARNING][5844] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" WorkloadEndpoint="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" Jan 23 01:05:58.053564 containerd[1567]: 2026-01-23 01:05:57.646 [INFO][5844] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Jan 23 01:05:58.053564 containerd[1567]: 2026-01-23 01:05:57.649 [INFO][5844] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" iface="eth0" netns="" Jan 23 01:05:58.053564 containerd[1567]: 2026-01-23 01:05:57.651 [INFO][5844] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Jan 23 01:05:58.053564 containerd[1567]: 2026-01-23 01:05:57.652 [INFO][5844] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Jan 23 01:05:58.053564 containerd[1567]: 2026-01-23 01:05:57.878 [INFO][5854] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" HandleID="k8s-pod-network.2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Workload="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" Jan 23 01:05:58.053564 containerd[1567]: 2026-01-23 01:05:57.887 [INFO][5854] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:05:58.053564 containerd[1567]: 2026-01-23 01:05:57.895 [INFO][5854] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:05:58.053564 containerd[1567]: 2026-01-23 01:05:57.954 [WARNING][5854] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" HandleID="k8s-pod-network.2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Workload="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" Jan 23 01:05:58.053564 containerd[1567]: 2026-01-23 01:05:57.954 [INFO][5854] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" HandleID="k8s-pod-network.2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Workload="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" Jan 23 01:05:58.053564 containerd[1567]: 2026-01-23 01:05:57.968 [INFO][5854] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:05:58.053564 containerd[1567]: 2026-01-23 01:05:58.021 [INFO][5844] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Jan 23 01:05:58.055412 containerd[1567]: time="2026-01-23T01:05:58.055132809Z" level=info msg="TearDown network for sandbox \"2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66\" successfully" Jan 23 01:05:58.055412 containerd[1567]: time="2026-01-23T01:05:58.055181102Z" level=info msg="StopPodSandbox for \"2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66\" returns successfully" Jan 23 01:05:58.058641 containerd[1567]: time="2026-01-23T01:05:58.058493857Z" level=info msg="RemovePodSandbox for \"2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66\"" Jan 23 01:05:58.075219 containerd[1567]: time="2026-01-23T01:05:58.074933973Z" level=info msg="Forcibly stopping sandbox \"2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66\"" Jan 23 01:05:58.591370 kubelet[2904]: E0123 01:05:58.591292 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:05:58.724939 containerd[1567]: 2026-01-23 01:05:58.468 [WARNING][5873] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" WorkloadEndpoint="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" Jan 23 01:05:58.724939 containerd[1567]: 2026-01-23 01:05:58.472 [INFO][5873] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Jan 23 01:05:58.724939 containerd[1567]: 2026-01-23 01:05:58.478 [INFO][5873] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" iface="eth0" netns="" Jan 23 01:05:58.724939 containerd[1567]: 2026-01-23 01:05:58.478 [INFO][5873] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Jan 23 01:05:58.724939 containerd[1567]: 2026-01-23 01:05:58.478 [INFO][5873] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Jan 23 01:05:58.724939 containerd[1567]: 2026-01-23 01:05:58.612 [INFO][5882] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" HandleID="k8s-pod-network.2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Workload="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" Jan 23 01:05:58.724939 containerd[1567]: 2026-01-23 01:05:58.616 [INFO][5882] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:05:58.724939 containerd[1567]: 2026-01-23 01:05:58.616 [INFO][5882] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:05:58.724939 containerd[1567]: 2026-01-23 01:05:58.675 [WARNING][5882] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" HandleID="k8s-pod-network.2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Workload="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" Jan 23 01:05:58.724939 containerd[1567]: 2026-01-23 01:05:58.675 [INFO][5882] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" HandleID="k8s-pod-network.2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Workload="localhost-k8s-whisker--6c554c8d6b--phpdd-eth0" Jan 23 01:05:58.724939 containerd[1567]: 2026-01-23 01:05:58.709 [INFO][5882] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:05:58.724939 containerd[1567]: 2026-01-23 01:05:58.715 [INFO][5873] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66" Jan 23 01:05:58.724939 containerd[1567]: time="2026-01-23T01:05:58.723934089Z" level=info msg="TearDown network for sandbox \"2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66\" successfully" Jan 23 01:05:58.742752 containerd[1567]: time="2026-01-23T01:05:58.742710601Z" level=info msg="Ensure that sandbox 2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66 in task-service has been cleanup successfully" Jan 23 01:05:58.812247 containerd[1567]: time="2026-01-23T01:05:58.810553641Z" level=info msg="RemovePodSandbox \"2343550e9ce853448ad17fc085c45ac1087acf63918e12fdc3e1c1eb7d88cf66\" returns successfully" Jan 23 01:05:59.588427 kubelet[2904]: E0123 01:05:59.574945 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:06:01.573121 containerd[1567]: time="2026-01-23T01:06:01.572343268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:06:01.740547 
containerd[1567]: time="2026-01-23T01:06:01.740098851Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:06:01.756409 containerd[1567]: time="2026-01-23T01:06:01.756338477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:06:01.757034 containerd[1567]: time="2026-01-23T01:06:01.756547312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:06:01.757570 kubelet[2904]: E0123 01:06:01.757519 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:06:01.761028 kubelet[2904]: E0123 01:06:01.758545 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:06:01.761028 kubelet[2904]: E0123 01:06:01.759093 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkbmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5hvbp_calico-system(4d54e261-de28-4a61-bcdc-0ebb829e113e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:06:01.761674 kubelet[2904]: E0123 01:06:01.761631 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e" Jan 23 01:06:04.578460 containerd[1567]: time="2026-01-23T01:06:04.577465705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:06:04.676210 containerd[1567]: time="2026-01-23T01:06:04.676050414Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Jan 23 01:06:04.679509 containerd[1567]: time="2026-01-23T01:06:04.679165484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:06:04.680535 containerd[1567]: time="2026-01-23T01:06:04.679460936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:06:04.680665 kubelet[2904]: E0123 01:06:04.680592 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:06:04.681574 kubelet[2904]: E0123 01:06:04.680666 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:06:04.681574 kubelet[2904]: E0123 01:06:04.681016 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a705283100714243a961fb2d223d106b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rfm6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57c574bd64-f4j4m_calico-system(f3049eb1-9735-4370-a74a-2cab9800bc64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:06:04.693130 containerd[1567]: time="2026-01-23T01:06:04.691746653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 
01:06:04.784157 containerd[1567]: time="2026-01-23T01:06:04.784083851Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:06:04.790941 containerd[1567]: time="2026-01-23T01:06:04.789969729Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:06:04.791068 containerd[1567]: time="2026-01-23T01:06:04.790996564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:06:04.792710 kubelet[2904]: E0123 01:06:04.792490 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:06:04.792710 kubelet[2904]: E0123 01:06:04.792654 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:06:04.793194 kubelet[2904]: E0123 01:06:04.793027 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfm6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57c574bd64-f4j4m_calico-system(f3049eb1-9735-4370-a74a-2cab9800bc64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:06:04.794932 kubelet[2904]: E0123 01:06:04.794649 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57c574bd64-f4j4m" podUID="f3049eb1-9735-4370-a74a-2cab9800bc64" Jan 23 01:06:05.507313 systemd[1]: Started sshd@7-10.0.0.18:22-10.0.0.1:36986.service - OpenSSH per-connection server daemon (10.0.0.1:36986). Jan 23 01:06:05.930610 sshd[5895]: Accepted publickey for core from 10.0.0.1 port 36986 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:06:05.943045 sshd-session[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:06.009582 systemd-logind[1547]: New session 8 of user core. Jan 23 01:06:06.053215 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 01:06:06.912042 sshd[5904]: Connection closed by 10.0.0.1 port 36986 Jan 23 01:06:06.917031 sshd-session[5895]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:06.923472 systemd-logind[1547]: Session 8 logged out. Waiting for processes to exit. 
Jan 23 01:06:06.927659 systemd[1]: sshd@7-10.0.0.18:22-10.0.0.1:36986.service: Deactivated successfully. Jan 23 01:06:06.937117 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 01:06:06.951240 systemd-logind[1547]: Removed session 8. Jan 23 01:06:07.579160 containerd[1567]: time="2026-01-23T01:06:07.579015127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:06:07.692256 containerd[1567]: time="2026-01-23T01:06:07.691669459Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:06:07.701078 containerd[1567]: time="2026-01-23T01:06:07.700348796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:06:07.701078 containerd[1567]: time="2026-01-23T01:06:07.699961480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:06:07.702170 kubelet[2904]: E0123 01:06:07.702076 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:06:07.702170 kubelet[2904]: E0123 01:06:07.702142 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:06:07.704311 kubelet[2904]: E0123 
01:06:07.702408 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7xws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6cd579f464-47gkr_calico-apiserver(aa11cfa4-c767-44e1-bc2c-24c685ae9875): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:06:07.705234 kubelet[2904]: E0123 01:06:07.704942 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875" Jan 23 01:06:07.709378 containerd[1567]: time="2026-01-23T01:06:07.707753491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:06:07.809156 containerd[1567]: 
time="2026-01-23T01:06:07.808931251Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:06:07.814958 containerd[1567]: time="2026-01-23T01:06:07.814171157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:06:07.814958 containerd[1567]: time="2026-01-23T01:06:07.814278794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:06:07.815143 kubelet[2904]: E0123 01:06:07.814481 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:06:07.815143 kubelet[2904]: E0123 01:06:07.814660 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:06:07.816953 kubelet[2904]: E0123 01:06:07.815388 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rbdvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5dcc89fd94-gvlr2_calico-system(5ea72ad9-04e5-48e1-a1f3-bd44567b901e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:06:07.817356 kubelet[2904]: E0123 01:06:07.817325 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e" Jan 23 01:06:08.585178 containerd[1567]: time="2026-01-23T01:06:08.583306606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:06:08.676394 containerd[1567]: 
time="2026-01-23T01:06:08.675928041Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:06:08.680694 containerd[1567]: time="2026-01-23T01:06:08.680402164Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:06:08.680694 containerd[1567]: time="2026-01-23T01:06:08.680494110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:06:08.683142 kubelet[2904]: E0123 01:06:08.681938 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:06:08.683142 kubelet[2904]: E0123 01:06:08.682013 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:06:08.683142 kubelet[2904]: E0123 01:06:08.682193 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2cws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6cd579f464-d54m6_calico-apiserver(3c57c36f-e9c4-4469-830b-86d51909b784): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:06:08.686341 kubelet[2904]: E0123 01:06:08.686301 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784" Jan 23 01:06:11.939227 systemd[1]: Started sshd@8-10.0.0.18:22-10.0.0.1:37000.service - OpenSSH per-connection server daemon (10.0.0.1:37000). Jan 23 01:06:12.060635 sshd[5927]: Accepted publickey for core from 10.0.0.1 port 37000 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:06:12.071172 sshd-session[5927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:12.099434 systemd-logind[1547]: New session 9 of user core. Jan 23 01:06:12.109656 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 01:06:12.511172 sshd[5930]: Connection closed by 10.0.0.1 port 37000 Jan 23 01:06:12.510196 sshd-session[5927]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:12.525386 systemd[1]: sshd@8-10.0.0.18:22-10.0.0.1:37000.service: Deactivated successfully. Jan 23 01:06:12.542403 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 01:06:12.549276 systemd-logind[1547]: Session 9 logged out. Waiting for processes to exit. Jan 23 01:06:12.558500 systemd-logind[1547]: Removed session 9. 
Jan 23 01:06:13.570098 containerd[1567]: time="2026-01-23T01:06:13.568153995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:06:13.655068 containerd[1567]: time="2026-01-23T01:06:13.653926567Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:06:13.656643 containerd[1567]: time="2026-01-23T01:06:13.656527401Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:06:13.656720 containerd[1567]: time="2026-01-23T01:06:13.656694062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:06:13.657367 kubelet[2904]: E0123 01:06:13.657245 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:06:13.658139 kubelet[2904]: E0123 01:06:13.657377 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:06:13.658139 kubelet[2904]: E0123 01:06:13.657548 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqkwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pk4tl_calico-system(77cee7a3-d314-42b2-8d1b-22ce21da8d56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:06:13.661941 containerd[1567]: time="2026-01-23T01:06:13.661333479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:06:13.726246 containerd[1567]: time="2026-01-23T01:06:13.725569021Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:06:13.730419 containerd[1567]: time="2026-01-23T01:06:13.730161947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:06:13.730419 containerd[1567]: time="2026-01-23T01:06:13.730302146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:06:13.731134 kubelet[2904]: E0123 01:06:13.731031 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:06:13.731205 kubelet[2904]: E0123 01:06:13.731159 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:06:13.731758 kubelet[2904]: E0123 
01:06:13.731549 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqkwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-pk4tl_calico-system(77cee7a3-d314-42b2-8d1b-22ce21da8d56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:06:13.734098 kubelet[2904]: E0123 01:06:13.733702 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:06:15.673553 kubelet[2904]: E0123 01:06:15.673393 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e" Jan 23 01:06:18.027428 systemd[1]: Started sshd@9-10.0.0.18:22-10.0.0.1:50014.service - OpenSSH per-connection 
server daemon (10.0.0.1:50014). Jan 23 01:06:18.822733 kubelet[2904]: E0123 01:06:18.814318 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57c574bd64-f4j4m" podUID="f3049eb1-9735-4370-a74a-2cab9800bc64" Jan 23 01:06:19.218548 sshd[5963]: Accepted publickey for core from 10.0.0.1 port 50014 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:06:19.216078 sshd-session[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:19.236408 systemd-logind[1547]: New session 10 of user core. Jan 23 01:06:19.247087 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 23 01:06:19.640496 kubelet[2904]: E0123 01:06:19.640160 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e" Jan 23 01:06:19.643089 kubelet[2904]: E0123 01:06:19.643034 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784" Jan 23 01:06:19.914683 sshd[5974]: Connection closed by 10.0.0.1 port 50014 Jan 23 01:06:19.914669 sshd-session[5963]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:19.922444 systemd[1]: sshd@9-10.0.0.18:22-10.0.0.1:50014.service: Deactivated successfully. Jan 23 01:06:19.930965 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 01:06:19.937016 systemd-logind[1547]: Session 10 logged out. Waiting for processes to exit. Jan 23 01:06:19.941641 systemd-logind[1547]: Removed session 10. 
Jan 23 01:06:21.654625 kubelet[2904]: E0123 01:06:21.653331 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875" Jan 23 01:06:25.022486 systemd[1]: Started sshd@10-10.0.0.18:22-10.0.0.1:35616.service - OpenSSH per-connection server daemon (10.0.0.1:35616). Jan 23 01:06:25.477457 sshd[5990]: Accepted publickey for core from 10.0.0.1 port 35616 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:06:25.482443 sshd-session[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:25.552406 systemd-logind[1547]: New session 11 of user core. Jan 23 01:06:25.561400 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 01:06:26.271923 sshd[5993]: Connection closed by 10.0.0.1 port 35616 Jan 23 01:06:26.274484 sshd-session[5990]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:26.355315 systemd[1]: sshd@10-10.0.0.18:22-10.0.0.1:35616.service: Deactivated successfully. Jan 23 01:06:26.364170 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 01:06:26.368437 systemd-logind[1547]: Session 11 logged out. Waiting for processes to exit. Jan 23 01:06:26.426532 systemd-logind[1547]: Removed session 11. 
Jan 23 01:06:26.565030 kubelet[2904]: E0123 01:06:26.564025 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:06:27.572480 kubelet[2904]: E0123 01:06:27.572193 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e" Jan 23 01:06:27.578989 kubelet[2904]: E0123 01:06:27.578651 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:06:31.307253 systemd[1]: Started 
sshd@11-10.0.0.18:22-10.0.0.1:35622.service - OpenSSH per-connection server daemon (10.0.0.1:35622). Jan 23 01:06:31.430447 sshd[6010]: Accepted publickey for core from 10.0.0.1 port 35622 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:06:31.433341 sshd-session[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:31.452524 systemd-logind[1547]: New session 12 of user core. Jan 23 01:06:31.461099 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 01:06:31.573515 kubelet[2904]: E0123 01:06:31.571506 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784" Jan 23 01:06:31.586384 kubelet[2904]: E0123 01:06:31.575247 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e" Jan 23 01:06:31.842648 sshd[6013]: Connection closed by 10.0.0.1 port 35622 Jan 23 01:06:31.844251 sshd-session[6010]: pam_unix(sshd:session): session 
closed for user core Jan 23 01:06:31.853464 systemd-logind[1547]: Session 12 logged out. Waiting for processes to exit. Jan 23 01:06:31.854537 systemd[1]: sshd@11-10.0.0.18:22-10.0.0.1:35622.service: Deactivated successfully. Jan 23 01:06:31.859419 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 01:06:31.865445 systemd-logind[1547]: Removed session 12. Jan 23 01:06:32.575257 kubelet[2904]: E0123 01:06:32.575123 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57c574bd64-f4j4m" podUID="f3049eb1-9735-4370-a74a-2cab9800bc64" Jan 23 01:06:36.563858 kubelet[2904]: E0123 01:06:36.563495 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875" Jan 23 01:06:36.912645 systemd[1]: Started sshd@12-10.0.0.18:22-10.0.0.1:33874.service - OpenSSH per-connection server daemon (10.0.0.1:33874). Jan 23 01:06:37.066075 sshd[6028]: Accepted publickey for core from 10.0.0.1 port 33874 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:06:37.071466 sshd-session[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:37.112213 systemd-logind[1547]: New session 13 of user core. Jan 23 01:06:37.121102 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 01:06:37.633178 sshd[6031]: Connection closed by 10.0.0.1 port 33874 Jan 23 01:06:37.635959 sshd-session[6028]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:37.652487 systemd[1]: sshd@12-10.0.0.18:22-10.0.0.1:33874.service: Deactivated successfully. Jan 23 01:06:37.658542 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 01:06:37.663585 systemd-logind[1547]: Session 13 logged out. Waiting for processes to exit. Jan 23 01:06:37.669528 systemd-logind[1547]: Removed session 13. 
Jan 23 01:06:39.564435 kubelet[2904]: E0123 01:06:39.562203 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:06:41.577575 kubelet[2904]: E0123 01:06:41.576123 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e" Jan 23 01:06:41.584497 kubelet[2904]: E0123 01:06:41.584419 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:06:42.575044 kubelet[2904]: E0123 01:06:42.572093 2904 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784" Jan 23 01:06:42.677189 systemd[1]: Started sshd@13-10.0.0.18:22-10.0.0.1:47084.service - OpenSSH per-connection server daemon (10.0.0.1:47084). Jan 23 01:06:42.900401 sshd[6047]: Accepted publickey for core from 10.0.0.1 port 47084 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:06:42.896460 sshd-session[6047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:42.913926 systemd-logind[1547]: New session 14 of user core. Jan 23 01:06:42.930012 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 23 01:06:43.564089 sshd[6050]: Connection closed by 10.0.0.1 port 47084 Jan 23 01:06:43.568611 sshd-session[6047]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:43.591329 kubelet[2904]: E0123 01:06:43.590588 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e" Jan 23 01:06:43.601525 systemd-logind[1547]: Session 14 logged out. Waiting for processes to exit. Jan 23 01:06:43.602403 systemd[1]: sshd@13-10.0.0.18:22-10.0.0.1:47084.service: Deactivated successfully. Jan 23 01:06:43.615130 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 01:06:43.640130 systemd-logind[1547]: Removed session 14. 
Jan 23 01:06:44.572045 kubelet[2904]: E0123 01:06:44.571342 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57c574bd64-f4j4m" podUID="f3049eb1-9735-4370-a74a-2cab9800bc64"
Jan 23 01:06:45.564593 kubelet[2904]: E0123 01:06:45.564329 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:06:48.629379 systemd[1]: Started sshd@14-10.0.0.18:22-10.0.0.1:47086.service - OpenSSH per-connection server daemon (10.0.0.1:47086).
Jan 23 01:06:49.121700 sshd[6095]: Accepted publickey for core from 10.0.0.1 port 47086 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:06:49.138320 sshd-session[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:06:49.155600 systemd-logind[1547]: New session 15 of user core.
Jan 23 01:06:49.167044 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 23 01:06:49.706990 sshd[6100]: Connection closed by 10.0.0.1 port 47086
Jan 23 01:06:49.707736 sshd-session[6095]: pam_unix(sshd:session): session closed for user core
Jan 23 01:06:49.761285 systemd[1]: sshd@14-10.0.0.18:22-10.0.0.1:47086.service: Deactivated successfully.
Jan 23 01:06:49.772214 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 01:06:49.791358 systemd-logind[1547]: Session 15 logged out. Waiting for processes to exit.
Jan 23 01:06:49.798414 systemd-logind[1547]: Removed session 15.
Jan 23 01:06:51.574320 containerd[1567]: time="2026-01-23T01:06:51.574058498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 01:06:51.685173 containerd[1567]: time="2026-01-23T01:06:51.684747718Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:06:51.705582 containerd[1567]: time="2026-01-23T01:06:51.700388388Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 01:06:51.705582 containerd[1567]: time="2026-01-23T01:06:51.700694035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 01:06:51.708129 kubelet[2904]: E0123 01:06:51.707970 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:06:51.711067 kubelet[2904]: E0123 01:06:51.708130 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:06:51.711067 kubelet[2904]: E0123 01:06:51.708512 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7xws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6cd579f464-47gkr_calico-apiserver(aa11cfa4-c767-44e1-bc2c-24c685ae9875): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:06:51.711067 kubelet[2904]: E0123 01:06:51.710968 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875"
Jan 23 01:06:52.563665 kubelet[2904]: E0123 01:06:52.563471 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:06:54.569091 containerd[1567]: time="2026-01-23T01:06:54.568697286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 01:06:54.694060 containerd[1567]: time="2026-01-23T01:06:54.692195304Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:06:54.720970 containerd[1567]: time="2026-01-23T01:06:54.718653658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 01:06:54.720970 containerd[1567]: time="2026-01-23T01:06:54.718754752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 01:06:54.721178 kubelet[2904]: E0123 01:06:54.720034 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:06:54.721178 kubelet[2904]: E0123 01:06:54.720100 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:06:54.721178 kubelet[2904]: E0123 01:06:54.720605 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2cws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6cd579f464-d54m6_calico-apiserver(3c57c36f-e9c4-4469-830b-86d51909b784): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:06:54.729177 kubelet[2904]: E0123 01:06:54.721757 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784"
Jan 23 01:06:54.764260 systemd[1]: Started sshd@15-10.0.0.18:22-10.0.0.1:53232.service - OpenSSH per-connection server daemon (10.0.0.1:53232).
Jan 23 01:06:54.946725 sshd[6114]: Accepted publickey for core from 10.0.0.1 port 53232 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:06:54.955235 sshd-session[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:06:54.977983 systemd-logind[1547]: New session 16 of user core.
Jan 23 01:06:55.007353 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 23 01:06:55.585733 containerd[1567]: time="2026-01-23T01:06:55.585233081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 01:06:55.595688 kubelet[2904]: E0123 01:06:55.585301 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:06:55.637215 sshd[6117]: Connection closed by 10.0.0.1 port 53232
Jan 23 01:06:55.643281 sshd-session[6114]: pam_unix(sshd:session): session closed for user core
Jan 23 01:06:55.662084 systemd[1]: Started sshd@16-10.0.0.18:22-10.0.0.1:53238.service - OpenSSH per-connection server daemon (10.0.0.1:53238).
Jan 23 01:06:55.674721 systemd[1]: sshd@15-10.0.0.18:22-10.0.0.1:53232.service: Deactivated successfully.
Jan 23 01:06:55.687389 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 01:06:55.714064 containerd[1567]: time="2026-01-23T01:06:55.713612220Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:06:55.714702 systemd-logind[1547]: Session 16 logged out. Waiting for processes to exit.
Jan 23 01:06:55.729965 containerd[1567]: time="2026-01-23T01:06:55.727020301Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 01:06:55.729965 containerd[1567]: time="2026-01-23T01:06:55.727156404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 01:06:55.735450 kubelet[2904]: E0123 01:06:55.735014 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 01:06:55.735450 kubelet[2904]: E0123 01:06:55.735158 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 01:06:55.735450 kubelet[2904]: E0123 01:06:55.735408 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqkwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pk4tl_calico-system(77cee7a3-d314-42b2-8d1b-22ce21da8d56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:06:55.741731 systemd-logind[1547]: Removed session 16.
Jan 23 01:06:55.752937 containerd[1567]: time="2026-01-23T01:06:55.752104666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 23 01:06:55.881086 containerd[1567]: time="2026-01-23T01:06:55.880589689Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:06:55.895321 containerd[1567]: time="2026-01-23T01:06:55.895152330Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 23 01:06:55.895321 containerd[1567]: time="2026-01-23T01:06:55.895277160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 23 01:06:55.895739 sshd[6129]: Accepted publickey for core from 10.0.0.1 port 53238 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:06:55.898466 kubelet[2904]: E0123 01:06:55.898270 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 01:06:55.898466 kubelet[2904]: E0123 01:06:55.898353 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 01:06:55.900258 sshd-session[6129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:06:55.901106 kubelet[2904]: E0123 01:06:55.900958 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a705283100714243a961fb2d223d106b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rfm6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57c574bd64-f4j4m_calico-system(f3049eb1-9735-4370-a74a-2cab9800bc64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:06:55.904962 containerd[1567]: time="2026-01-23T01:06:55.904665383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 23 01:06:55.912470 systemd-logind[1547]: New session 17 of user core.
Jan 23 01:06:55.929296 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 23 01:06:56.034674 containerd[1567]: time="2026-01-23T01:06:56.034290838Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:06:56.044670 containerd[1567]: time="2026-01-23T01:06:56.044432450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 01:06:56.045053 containerd[1567]: time="2026-01-23T01:06:56.044673235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 23 01:06:56.048942 kubelet[2904]: E0123 01:06:56.045357 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 01:06:56.048942 kubelet[2904]: E0123 01:06:56.045429 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 01:06:56.048942 kubelet[2904]: E0123 01:06:56.046169 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rbdvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5dcc89fd94-gvlr2_calico-system(5ea72ad9-04e5-48e1-a1f3-bd44567b901e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:06:56.048942 kubelet[2904]: E0123 01:06:56.047280 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e"
Jan 23 01:06:56.056429 containerd[1567]: time="2026-01-23T01:06:56.056115256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 01:06:56.164669 containerd[1567]: time="2026-01-23T01:06:56.164300042Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:06:56.171753 containerd[1567]: time="2026-01-23T01:06:56.168735196Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 01:06:56.171753 containerd[1567]: time="2026-01-23T01:06:56.168964260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 01:06:56.171753 containerd[1567]: time="2026-01-23T01:06:56.171679706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 23 01:06:56.172103 kubelet[2904]: E0123 01:06:56.169272 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:06:56.172103 kubelet[2904]: E0123 01:06:56.169338 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:06:56.172103 kubelet[2904]: E0123 01:06:56.170723 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqkwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pk4tl_calico-system(77cee7a3-d314-42b2-8d1b-22ce21da8d56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:06:56.174240 kubelet[2904]: E0123 01:06:56.172976 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56"
Jan 23 01:06:56.284317 containerd[1567]: time="2026-01-23T01:06:56.284163806Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:06:56.314206 containerd[1567]: time="2026-01-23T01:06:56.313230709Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 23 01:06:56.314206 containerd[1567]: time="2026-01-23T01:06:56.314024330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 23 01:06:56.316907 kubelet[2904]: E0123 01:06:56.316360 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 01:06:56.316907 kubelet[2904]: E0123 01:06:56.316428 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 01:06:56.316907 kubelet[2904]: E0123 01:06:56.316634 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfm6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57c574bd64-f4j4m_calico-system(f3049eb1-9735-4370-a74a-2cab9800bc64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:06:56.319933 kubelet[2904]: E0123 01:06:56.319130 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57c574bd64-f4j4m" podUID="f3049eb1-9735-4370-a74a-2cab9800bc64" Jan 23 01:06:56.563919 
kubelet[2904]: E0123 01:06:56.562646 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:06:56.578951 containerd[1567]: time="2026-01-23T01:06:56.578275807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:06:56.616519 sshd[6135]: Connection closed by 10.0.0.1 port 53238 Jan 23 01:06:56.616311 sshd-session[6129]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:56.643374 systemd[1]: Started sshd@17-10.0.0.18:22-10.0.0.1:53252.service - OpenSSH per-connection server daemon (10.0.0.1:53252). Jan 23 01:06:56.645160 systemd[1]: sshd@16-10.0.0.18:22-10.0.0.1:53238.service: Deactivated successfully. Jan 23 01:06:56.658070 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 01:06:56.672707 systemd-logind[1547]: Session 17 logged out. Waiting for processes to exit. Jan 23 01:06:56.685993 containerd[1567]: time="2026-01-23T01:06:56.685476477Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:06:56.700156 systemd-logind[1547]: Removed session 17. 
Jan 23 01:06:56.704993 containerd[1567]: time="2026-01-23T01:06:56.703262054Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:06:56.704993 containerd[1567]: time="2026-01-23T01:06:56.703396644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:06:56.705640 kubelet[2904]: E0123 01:06:56.705501 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:06:56.707036 kubelet[2904]: E0123 01:06:56.706993 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:06:56.726087 kubelet[2904]: E0123 01:06:56.709373 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkbmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5hvbp_calico-system(4d54e261-de28-4a61-bcdc-0ebb829e113e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:06:56.729159 kubelet[2904]: E0123 01:06:56.728687 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e" Jan 23 01:06:56.855417 sshd[6144]: Accepted publickey for core from 10.0.0.1 port 53252 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:06:56.867037 sshd-session[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 
01:06:56.903703 systemd-logind[1547]: New session 18 of user core. Jan 23 01:06:56.935480 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 01:06:57.484035 sshd[6150]: Connection closed by 10.0.0.1 port 53252 Jan 23 01:06:57.491548 sshd-session[6144]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:57.529125 systemd[1]: sshd@17-10.0.0.18:22-10.0.0.1:53252.service: Deactivated successfully. Jan 23 01:06:57.545297 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 01:06:57.552950 systemd-logind[1547]: Session 18 logged out. Waiting for processes to exit. Jan 23 01:06:57.568020 kubelet[2904]: E0123 01:06:57.565054 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:06:57.573138 systemd-logind[1547]: Removed session 18. Jan 23 01:07:02.523469 systemd[1]: Started sshd@18-10.0.0.18:22-10.0.0.1:57988.service - OpenSSH per-connection server daemon (10.0.0.1:57988). Jan 23 01:07:02.736683 sshd[6186]: Accepted publickey for core from 10.0.0.1 port 57988 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:07:02.745688 sshd-session[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:02.763137 systemd-logind[1547]: New session 19 of user core. Jan 23 01:07:02.784494 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 23 01:07:04.424702 kubelet[2904]: E0123 01:07:04.413435 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875" Jan 23 01:07:04.571411 sshd[6189]: Connection closed by 10.0.0.1 port 57988 Jan 23 01:07:04.573381 sshd-session[6186]: pam_unix(sshd:session): session closed for user core Jan 23 01:07:04.601403 systemd[1]: sshd@18-10.0.0.18:22-10.0.0.1:57988.service: Deactivated successfully. Jan 23 01:07:04.613573 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 01:07:04.633182 systemd-logind[1547]: Session 19 logged out. Waiting for processes to exit. Jan 23 01:07:04.642260 systemd-logind[1547]: Removed session 19. 
Jan 23 01:07:06.575485 kubelet[2904]: E0123 01:07:06.569747 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:07:07.596428 kubelet[2904]: E0123 01:07:07.595682 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784" Jan 23 01:07:08.582980 kubelet[2904]: E0123 01:07:08.578563 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e" Jan 23 01:07:08.582980 kubelet[2904]: E0123 01:07:08.578713 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e" Jan 23 01:07:09.647322 kubelet[2904]: E0123 01:07:09.647258 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:07:09.661324 systemd[1]: Started sshd@19-10.0.0.18:22-10.0.0.1:58004.service - OpenSSH per-connection server daemon (10.0.0.1:58004). Jan 23 01:07:09.942248 sshd[6204]: Accepted publickey for core from 10.0.0.1 port 58004 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:07:09.958757 sshd-session[6204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:10.002197 systemd-logind[1547]: New session 20 of user core. Jan 23 01:07:10.022611 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 23 01:07:10.605177 kubelet[2904]: E0123 01:07:10.602401 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57c574bd64-f4j4m" podUID="f3049eb1-9735-4370-a74a-2cab9800bc64" Jan 23 01:07:10.684688 sshd[6207]: Connection closed by 10.0.0.1 port 58004 Jan 23 01:07:10.686124 sshd-session[6204]: pam_unix(sshd:session): session closed for user core Jan 23 01:07:10.726620 systemd[1]: sshd@19-10.0.0.18:22-10.0.0.1:58004.service: Deactivated successfully. Jan 23 01:07:10.727264 systemd-logind[1547]: Session 20 logged out. Waiting for processes to exit. Jan 23 01:07:10.739713 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 01:07:10.747367 systemd-logind[1547]: Removed session 20. Jan 23 01:07:15.731150 systemd[1]: Started sshd@20-10.0.0.18:22-10.0.0.1:59902.service - OpenSSH per-connection server daemon (10.0.0.1:59902). 
Jan 23 01:07:15.964189 sshd[6220]: Accepted publickey for core from 10.0.0.1 port 59902 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:07:15.966285 sshd-session[6220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:15.989179 systemd-logind[1547]: New session 21 of user core. Jan 23 01:07:16.009332 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 01:07:16.570125 kubelet[2904]: E0123 01:07:16.569405 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875" Jan 23 01:07:16.740187 sshd[6224]: Connection closed by 10.0.0.1 port 59902 Jan 23 01:07:16.743600 sshd-session[6220]: pam_unix(sshd:session): session closed for user core Jan 23 01:07:16.783269 systemd-logind[1547]: Session 21 logged out. Waiting for processes to exit. Jan 23 01:07:16.786400 systemd[1]: sshd@20-10.0.0.18:22-10.0.0.1:59902.service: Deactivated successfully. Jan 23 01:07:16.794610 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 01:07:16.821266 systemd-logind[1547]: Removed session 21. 
Jan 23 01:07:19.596419 kubelet[2904]: E0123 01:07:19.595614 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784" Jan 23 01:07:19.640635 kubelet[2904]: E0123 01:07:19.640366 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e" Jan 23 01:07:21.308991 containerd[1567]: time="2026-01-23T01:07:21.297081337Z" level=warning msg="container event discarded" container=5cb66d823f37deaa9c9698cb19aeb57448cff62f3b1c24cf91cfc755e8cf47e3 type=CONTAINER_CREATED_EVENT Jan 23 01:07:21.334363 containerd[1567]: time="2026-01-23T01:07:21.334012918Z" level=warning msg="container event discarded" container=5cb66d823f37deaa9c9698cb19aeb57448cff62f3b1c24cf91cfc755e8cf47e3 type=CONTAINER_STARTED_EVENT Jan 23 01:07:21.435949 containerd[1567]: time="2026-01-23T01:07:21.434580229Z" level=warning msg="container event discarded" container=30da438abf3dc33a4fd7453526e15258c1a94f8ff7121750e84b72ec1471c1c5 
type=CONTAINER_CREATED_EVENT Jan 23 01:07:21.435949 containerd[1567]: time="2026-01-23T01:07:21.434645135Z" level=warning msg="container event discarded" container=30da438abf3dc33a4fd7453526e15258c1a94f8ff7121750e84b72ec1471c1c5 type=CONTAINER_STARTED_EVENT Jan 23 01:07:21.466745 containerd[1567]: time="2026-01-23T01:07:21.466663961Z" level=warning msg="container event discarded" container=716b1c984fc91250425933d68dc456812436a34eccde144fe2e1546cb32977e1 type=CONTAINER_CREATED_EVENT Jan 23 01:07:21.479120 containerd[1567]: time="2026-01-23T01:07:21.479046968Z" level=warning msg="container event discarded" container=08539a6df246c4af8486383c3c70a801ebcd709f61b512b76c9e955f1866ca5f type=CONTAINER_CREATED_EVENT Jan 23 01:07:21.485614 containerd[1567]: time="2026-01-23T01:07:21.485339506Z" level=warning msg="container event discarded" container=08539a6df246c4af8486383c3c70a801ebcd709f61b512b76c9e955f1866ca5f type=CONTAINER_STARTED_EVENT Jan 23 01:07:21.557749 containerd[1567]: time="2026-01-23T01:07:21.557661098Z" level=warning msg="container event discarded" container=9d5db59dc60e9a5edd390e30ba247371a434c0f0d0b23ae0b2c40b694c6c0398 type=CONTAINER_CREATED_EVENT Jan 23 01:07:21.583327 kubelet[2904]: E0123 01:07:21.582672 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57c574bd64-f4j4m" podUID="f3049eb1-9735-4370-a74a-2cab9800bc64" Jan 23 01:07:21.590403 containerd[1567]: time="2026-01-23T01:07:21.590137681Z" level=warning msg="container event discarded" container=a8352e5e50011dcb90427ef0827e66225c9cd3d0c7b719166121dae8605de346 type=CONTAINER_CREATED_EVENT Jan 23 01:07:21.787311 systemd[1]: Started sshd@21-10.0.0.18:22-10.0.0.1:59912.service - OpenSSH per-connection server daemon (10.0.0.1:59912). Jan 23 01:07:21.822693 containerd[1567]: time="2026-01-23T01:07:21.822542078Z" level=warning msg="container event discarded" container=716b1c984fc91250425933d68dc456812436a34eccde144fe2e1546cb32977e1 type=CONTAINER_STARTED_EVENT Jan 23 01:07:21.897620 containerd[1567]: time="2026-01-23T01:07:21.897394176Z" level=warning msg="container event discarded" container=9d5db59dc60e9a5edd390e30ba247371a434c0f0d0b23ae0b2c40b694c6c0398 type=CONTAINER_STARTED_EVENT Jan 23 01:07:21.973024 containerd[1567]: time="2026-01-23T01:07:21.972734810Z" level=warning msg="container event discarded" container=a8352e5e50011dcb90427ef0827e66225c9cd3d0c7b719166121dae8605de346 type=CONTAINER_STARTED_EVENT Jan 23 01:07:22.069001 sshd[6263]: Accepted publickey for core from 10.0.0.1 port 59912 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:07:22.080561 sshd-session[6263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:22.143017 systemd-logind[1547]: New session 22 of user core. Jan 23 01:07:22.159514 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 01:07:22.761080 sshd[6266]: Connection closed by 10.0.0.1 port 59912 Jan 23 01:07:22.763611 sshd-session[6263]: pam_unix(sshd:session): session closed for user core Jan 23 01:07:22.814946 systemd[1]: sshd@21-10.0.0.18:22-10.0.0.1:59912.service: Deactivated successfully. 
Jan 23 01:07:22.815972 systemd-logind[1547]: Session 22 logged out. Waiting for processes to exit. Jan 23 01:07:22.826211 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 01:07:22.834610 systemd-logind[1547]: Removed session 22. Jan 23 01:07:23.610553 kubelet[2904]: E0123 01:07:23.608453 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e" Jan 23 01:07:23.616970 kubelet[2904]: E0123 01:07:23.615223 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:07:27.802426 systemd[1]: Started 
sshd@22-10.0.0.18:22-10.0.0.1:48286.service - OpenSSH per-connection server daemon (10.0.0.1:48286). Jan 23 01:07:28.014403 sshd[6281]: Accepted publickey for core from 10.0.0.1 port 48286 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:07:28.019420 sshd-session[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:28.058125 systemd-logind[1547]: New session 23 of user core. Jan 23 01:07:28.075497 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 01:07:28.751233 sshd[6284]: Connection closed by 10.0.0.1 port 48286 Jan 23 01:07:28.754297 sshd-session[6281]: pam_unix(sshd:session): session closed for user core Jan 23 01:07:28.767174 systemd[1]: sshd@22-10.0.0.18:22-10.0.0.1:48286.service: Deactivated successfully. Jan 23 01:07:28.781192 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 01:07:28.786429 systemd-logind[1547]: Session 23 logged out. Waiting for processes to exit. Jan 23 01:07:28.794273 systemd-logind[1547]: Removed session 23. 
Jan 23 01:07:29.576007 kubelet[2904]: E0123 01:07:29.574167 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875" Jan 23 01:07:31.570659 kubelet[2904]: E0123 01:07:31.570594 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e" Jan 23 01:07:33.572667 kubelet[2904]: E0123 01:07:33.572324 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784" Jan 23 01:07:33.575465 kubelet[2904]: E0123 
01:07:33.575246 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57c574bd64-f4j4m" podUID="f3049eb1-9735-4370-a74a-2cab9800bc64" Jan 23 01:07:33.792986 systemd[1]: Started sshd@23-10.0.0.18:22-10.0.0.1:44314.service - OpenSSH per-connection server daemon (10.0.0.1:44314). Jan 23 01:07:34.018236 sshd[6298]: Accepted publickey for core from 10.0.0.1 port 44314 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:07:34.029202 sshd-session[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:34.065668 systemd-logind[1547]: New session 24 of user core. Jan 23 01:07:34.088439 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 01:07:34.910933 sshd[6301]: Connection closed by 10.0.0.1 port 44314 Jan 23 01:07:34.889293 sshd-session[6298]: pam_unix(sshd:session): session closed for user core Jan 23 01:07:34.926232 systemd[1]: sshd@23-10.0.0.18:22-10.0.0.1:44314.service: Deactivated successfully. Jan 23 01:07:34.935291 systemd[1]: session-24.scope: Deactivated successfully. 
Jan 23 01:07:34.951753 systemd-logind[1547]: Session 24 logged out. Waiting for processes to exit.
Jan 23 01:07:34.968235 systemd-logind[1547]: Removed session 24.
Jan 23 01:07:36.589549 kubelet[2904]: E0123 01:07:36.585758 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56"
Jan 23 01:07:37.571306 kubelet[2904]: E0123 01:07:37.570643 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e"
Jan 23 01:07:39.935554 systemd[1]: Started sshd@24-10.0.0.18:22-10.0.0.1:44320.service - OpenSSH per-connection server daemon (10.0.0.1:44320).
Jan 23 01:07:40.162747 sshd[6318]: Accepted publickey for core from 10.0.0.1 port 44320 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:07:40.170423 sshd-session[6318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:07:40.210135 systemd-logind[1547]: New session 25 of user core.
Jan 23 01:07:40.233120 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 23 01:07:40.880448 sshd[6321]: Connection closed by 10.0.0.1 port 44320
Jan 23 01:07:40.883271 sshd-session[6318]: pam_unix(sshd:session): session closed for user core
Jan 23 01:07:40.909601 systemd[1]: sshd@24-10.0.0.18:22-10.0.0.1:44320.service: Deactivated successfully.
Jan 23 01:07:40.916085 systemd[1]: session-25.scope: Deactivated successfully.
Jan 23 01:07:40.921414 systemd-logind[1547]: Session 25 logged out. Waiting for processes to exit.
Jan 23 01:07:40.941022 systemd-logind[1547]: Removed session 25.
Jan 23 01:07:43.570923 kubelet[2904]: E0123 01:07:43.570528 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875"
Jan 23 01:07:44.576420 kubelet[2904]: E0123 01:07:44.575070 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57c574bd64-f4j4m" podUID="f3049eb1-9735-4370-a74a-2cab9800bc64"
Jan 23 01:07:45.578458 kubelet[2904]: E0123 01:07:45.573464 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784"
Jan 23 01:07:45.595302 kubelet[2904]: E0123 01:07:45.595257 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e"
Jan 23 01:07:45.934537 systemd[1]: Started sshd@25-10.0.0.18:22-10.0.0.1:34850.service - OpenSSH per-connection server daemon (10.0.0.1:34850).
Jan 23 01:07:46.219706 sshd[6335]: Accepted publickey for core from 10.0.0.1 port 34850 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:07:46.229603 sshd-session[6335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:07:46.250002 systemd-logind[1547]: New session 26 of user core.
Jan 23 01:07:46.269265 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 23 01:07:46.986018 sshd[6356]: Connection closed by 10.0.0.1 port 34850
Jan 23 01:07:46.986345 sshd-session[6335]: pam_unix(sshd:session): session closed for user core
Jan 23 01:07:47.012404 systemd[1]: sshd@25-10.0.0.18:22-10.0.0.1:34850.service: Deactivated successfully.
Jan 23 01:07:47.023990 systemd[1]: session-26.scope: Deactivated successfully.
Jan 23 01:07:47.036345 systemd-logind[1547]: Session 26 logged out. Waiting for processes to exit.
Jan 23 01:07:47.040594 systemd-logind[1547]: Removed session 26.
Jan 23 01:07:49.573128 kubelet[2904]: E0123 01:07:49.572383 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e"
Jan 23 01:07:50.578179 kubelet[2904]: E0123 01:07:50.575480 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56"
Jan 23 01:07:52.026111 systemd[1]: Started sshd@26-10.0.0.18:22-10.0.0.1:34852.service - OpenSSH per-connection server daemon (10.0.0.1:34852).
Jan 23 01:07:52.157150 sshd[6379]: Accepted publickey for core from 10.0.0.1 port 34852 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:07:52.163980 sshd-session[6379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:07:52.215447 systemd-logind[1547]: New session 27 of user core.
Jan 23 01:07:52.233014 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 23 01:07:52.926926 sshd[6382]: Connection closed by 10.0.0.1 port 34852
Jan 23 01:07:52.927441 sshd-session[6379]: pam_unix(sshd:session): session closed for user core
Jan 23 01:07:52.957410 systemd-logind[1547]: Session 27 logged out. Waiting for processes to exit.
Jan 23 01:07:52.961234 systemd[1]: sshd@26-10.0.0.18:22-10.0.0.1:34852.service: Deactivated successfully.
Jan 23 01:07:52.973255 systemd[1]: session-27.scope: Deactivated successfully.
Jan 23 01:07:52.981738 systemd-logind[1547]: Removed session 27.
Jan 23 01:07:53.562511 kubelet[2904]: E0123 01:07:53.562364 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:07:53.566556 kubelet[2904]: E0123 01:07:53.563488 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:07:56.576332 kubelet[2904]: E0123 01:07:56.576281 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784"
Jan 23 01:07:56.581046 kubelet[2904]: E0123 01:07:56.580731 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e"
Jan 23 01:07:57.573594 kubelet[2904]: E0123 01:07:57.573263 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875"
Jan 23 01:07:57.957494 systemd[1]: Started sshd@27-10.0.0.18:22-10.0.0.1:38062.service - OpenSSH per-connection server daemon (10.0.0.1:38062).
Jan 23 01:07:58.119647 sshd[6402]: Accepted publickey for core from 10.0.0.1 port 38062 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:07:58.122678 sshd-session[6402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:07:58.149692 systemd-logind[1547]: New session 28 of user core.
Jan 23 01:07:58.165199 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 23 01:07:58.580228 kubelet[2904]: E0123 01:07:58.580021 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57c574bd64-f4j4m" podUID="f3049eb1-9735-4370-a74a-2cab9800bc64"
Jan 23 01:07:58.605711 sshd[6405]: Connection closed by 10.0.0.1 port 38062
Jan 23 01:07:58.606532 sshd-session[6402]: pam_unix(sshd:session): session closed for user core
Jan 23 01:07:58.620124 systemd[1]: sshd@27-10.0.0.18:22-10.0.0.1:38062.service: Deactivated successfully.
Jan 23 01:07:58.633380 systemd[1]: session-28.scope: Deactivated successfully.
Jan 23 01:07:58.642347 systemd-logind[1547]: Session 28 logged out. Waiting for processes to exit.
Jan 23 01:07:58.649363 systemd-logind[1547]: Removed session 28.
Jan 23 01:08:03.578494 kubelet[2904]: E0123 01:08:03.573688 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:08:03.609046 kubelet[2904]: E0123 01:08:03.583624 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e"
Jan 23 01:08:03.648116 systemd[1]: Started sshd@28-10.0.0.18:22-10.0.0.1:45962.service - OpenSSH per-connection server daemon (10.0.0.1:45962).
Jan 23 01:08:03.937129 sshd[6418]: Accepted publickey for core from 10.0.0.1 port 45962 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:08:03.945241 sshd-session[6418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:08:03.978336 systemd-logind[1547]: New session 29 of user core.
Jan 23 01:08:04.051639 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 23 01:08:04.647076 sshd[6421]: Connection closed by 10.0.0.1 port 45962
Jan 23 01:08:04.648416 sshd-session[6418]: pam_unix(sshd:session): session closed for user core
Jan 23 01:08:04.667376 systemd[1]: sshd@28-10.0.0.18:22-10.0.0.1:45962.service: Deactivated successfully.
Jan 23 01:08:04.677061 systemd[1]: session-29.scope: Deactivated successfully.
Jan 23 01:08:04.709372 systemd-logind[1547]: Session 29 logged out. Waiting for processes to exit.
Jan 23 01:08:04.717259 systemd-logind[1547]: Removed session 29.
Jan 23 01:08:04.771632 containerd[1567]: time="2026-01-23T01:08:04.771531782Z" level=warning msg="container event discarded" container=9723266ea16a75393f63a8fd2d8df4ce0fe93801580a68762865a6da4dce2d1c type=CONTAINER_CREATED_EVENT
Jan 23 01:08:04.772612 containerd[1567]: time="2026-01-23T01:08:04.772583683Z" level=warning msg="container event discarded" container=9723266ea16a75393f63a8fd2d8df4ce0fe93801580a68762865a6da4dce2d1c type=CONTAINER_STARTED_EVENT
Jan 23 01:08:05.146752 containerd[1567]: time="2026-01-23T01:08:05.146649228Z" level=warning msg="container event discarded" container=b466006d9c10a4c1bbc97d2122fde76e93a8bd11b923d30351690a4cb713f98a type=CONTAINER_CREATED_EVENT
Jan 23 01:08:05.569169 kubelet[2904]: E0123 01:08:05.568569 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56"
Jan 23 01:08:05.866160 containerd[1567]: time="2026-01-23T01:08:05.865177820Z" level=warning msg="container event discarded" container=b466006d9c10a4c1bbc97d2122fde76e93a8bd11b923d30351690a4cb713f98a type=CONTAINER_STARTED_EVENT
Jan 23 01:08:06.506749 containerd[1567]: time="2026-01-23T01:08:06.505184829Z" level=warning msg="container event discarded" container=0638e95e8d7dbbccc52f5f8a124bb807acfad8890e8a77fd67784d1b8a593b80 type=CONTAINER_CREATED_EVENT
Jan 23 01:08:06.506749 containerd[1567]: time="2026-01-23T01:08:06.505360407Z" level=warning msg="container event discarded" container=0638e95e8d7dbbccc52f5f8a124bb807acfad8890e8a77fd67784d1b8a593b80 type=CONTAINER_STARTED_EVENT
Jan 23 01:08:06.562201 kubelet[2904]: E0123 01:08:06.562064 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:08:09.578029 kubelet[2904]: E0123 01:08:09.577273 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57c574bd64-f4j4m" podUID="f3049eb1-9735-4370-a74a-2cab9800bc64"
Jan 23 01:08:09.578029 kubelet[2904]: E0123 01:08:09.577355 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e"
Jan 23 01:08:09.690120 systemd[1]: Started sshd@29-10.0.0.18:22-10.0.0.1:45972.service - OpenSSH per-connection server daemon (10.0.0.1:45972).
Jan 23 01:08:09.865230 sshd[6445]: Accepted publickey for core from 10.0.0.1 port 45972 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:08:09.869353 sshd-session[6445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:08:09.885420 systemd-logind[1547]: New session 30 of user core.
Jan 23 01:08:09.893261 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 23 01:08:10.295725 sshd[6448]: Connection closed by 10.0.0.1 port 45972
Jan 23 01:08:10.299221 sshd-session[6445]: pam_unix(sshd:session): session closed for user core
Jan 23 01:08:10.310519 systemd[1]: sshd@29-10.0.0.18:22-10.0.0.1:45972.service: Deactivated successfully.
Jan 23 01:08:10.319422 systemd[1]: session-30.scope: Deactivated successfully.
Jan 23 01:08:10.324153 systemd-logind[1547]: Session 30 logged out. Waiting for processes to exit.
Jan 23 01:08:10.330117 systemd-logind[1547]: Removed session 30.
Jan 23 01:08:11.569066 kubelet[2904]: E0123 01:08:11.568157 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875"
Jan 23 01:08:11.579162 kubelet[2904]: E0123 01:08:11.576736 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784"
Jan 23 01:08:14.566179 kubelet[2904]: E0123 01:08:14.566013 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:08:15.393158 systemd[1]: Started sshd@30-10.0.0.18:22-10.0.0.1:40586.service - OpenSSH per-connection server daemon (10.0.0.1:40586).
Jan 23 01:08:15.530921 sshd[6462]: Accepted publickey for core from 10.0.0.1 port 40586 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:08:15.534158 sshd-session[6462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:08:15.549910 systemd-logind[1547]: New session 31 of user core.
Jan 23 01:08:15.554111 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 23 01:08:15.566036 kubelet[2904]: E0123 01:08:15.565955 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:08:15.867347 sshd[6465]: Connection closed by 10.0.0.1 port 40586
Jan 23 01:08:15.870552 sshd-session[6462]: pam_unix(sshd:session): session closed for user core
Jan 23 01:08:15.890142 systemd[1]: sshd@30-10.0.0.18:22-10.0.0.1:40586.service: Deactivated successfully.
Jan 23 01:08:15.900042 systemd[1]: session-31.scope: Deactivated successfully.
Jan 23 01:08:15.903613 systemd-logind[1547]: Session 31 logged out. Waiting for processes to exit.
Jan 23 01:08:15.912612 systemd-logind[1547]: Removed session 31.
Jan 23 01:08:16.564725 kubelet[2904]: E0123 01:08:16.564076 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:08:18.568878 containerd[1567]: time="2026-01-23T01:08:18.568628095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 23 01:08:18.654960 containerd[1567]: time="2026-01-23T01:08:18.654733963Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:08:18.658400 containerd[1567]: time="2026-01-23T01:08:18.658248719Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 01:08:18.658650 containerd[1567]: time="2026-01-23T01:08:18.658475027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 23 01:08:18.659316 kubelet[2904]: E0123 01:08:18.659266 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 01:08:18.660914 kubelet[2904]: E0123 01:08:18.659953 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 01:08:18.660914 kubelet[2904]: E0123 01:08:18.660344 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkbmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5hvbp_calico-system(4d54e261-de28-4a61-bcdc-0ebb829e113e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:08:18.661243 containerd[1567]: time="2026-01-23T01:08:18.660706312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 01:08:18.662492 kubelet[2904]: E0123 01:08:18.661740 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e"
Jan 23 01:08:18.742514 containerd[1567]: time="2026-01-23T01:08:18.742453701Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:08:18.747865 containerd[1567]: time="2026-01-23T01:08:18.747326694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 01:08:18.747865 containerd[1567]: time="2026-01-23T01:08:18.747478540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 01:08:18.748470 kubelet[2904]: E0123 01:08:18.747744 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 01:08:18.750702 kubelet[2904]: E0123 01:08:18.749586 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 01:08:18.750702 kubelet[2904]: E0123 01:08:18.749879 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqkwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pk4tl_calico-system(77cee7a3-d314-42b2-8d1b-22ce21da8d56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:08:18.754279 containerd[1567]: time="2026-01-23T01:08:18.754073514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 01:08:18.879971 containerd[1567]: time="2026-01-23T01:08:18.879405994Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:08:18.885944 containerd[1567]: time="2026-01-23T01:08:18.884738290Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 01:08:18.885944 containerd[1567]: time="2026-01-23T01:08:18.885069913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 01:08:18.886105 kubelet[2904]: E0123 01:08:18.885415 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:08:18.886105 kubelet[2904]: E0123 01:08:18.885481 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:08:18.886105 kubelet[2904]: E0123
01:08:18.885653 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqkwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-pk4tl_calico-system(77cee7a3-d314-42b2-8d1b-22ce21da8d56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:18.888382 kubelet[2904]: E0123 01:08:18.888049 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:08:20.564883 containerd[1567]: time="2026-01-23T01:08:20.564062807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:08:20.663270 containerd[1567]: time="2026-01-23T01:08:20.662906254Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:20.669564 containerd[1567]: time="2026-01-23T01:08:20.669505296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: 
not found" Jan 23 01:08:20.670390 containerd[1567]: time="2026-01-23T01:08:20.669678214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:08:20.671364 kubelet[2904]: E0123 01:08:20.671178 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:08:20.672009 kubelet[2904]: E0123 01:08:20.671428 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:08:20.672009 kubelet[2904]: E0123 01:08:20.671619 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rbdvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5dcc89fd94-gvlr2_calico-system(5ea72ad9-04e5-48e1-a1f3-bd44567b901e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:20.673470 kubelet[2904]: E0123 01:08:20.673145 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e" Jan 23 01:08:20.904205 systemd[1]: Started sshd@31-10.0.0.18:22-10.0.0.1:40602.service - OpenSSH per-connection server daemon (10.0.0.1:40602). 
Jan 23 01:08:21.015846 sshd[6507]: Accepted publickey for core from 10.0.0.1 port 40602 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:08:21.019230 sshd-session[6507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:08:21.035288 systemd-logind[1547]: New session 32 of user core. Jan 23 01:08:21.040630 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 23 01:08:21.070730 containerd[1567]: time="2026-01-23T01:08:21.070605209Z" level=warning msg="container event discarded" container=230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56 type=CONTAINER_CREATED_EVENT Jan 23 01:08:21.324366 sshd[6510]: Connection closed by 10.0.0.1 port 40602 Jan 23 01:08:21.325536 sshd-session[6507]: pam_unix(sshd:session): session closed for user core Jan 23 01:08:21.344188 systemd[1]: sshd@31-10.0.0.18:22-10.0.0.1:40602.service: Deactivated successfully. Jan 23 01:08:21.348147 systemd[1]: session-32.scope: Deactivated successfully. Jan 23 01:08:21.361368 systemd-logind[1547]: Session 32 logged out. Waiting for processes to exit. Jan 23 01:08:21.369113 systemd[1]: Started sshd@32-10.0.0.18:22-10.0.0.1:40618.service - OpenSSH per-connection server daemon (10.0.0.1:40618). Jan 23 01:08:21.376300 systemd-logind[1547]: Removed session 32. Jan 23 01:08:21.526406 sshd[6524]: Accepted publickey for core from 10.0.0.1 port 40618 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:08:21.531728 sshd-session[6524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:08:21.548613 systemd-logind[1547]: New session 33 of user core. Jan 23 01:08:21.560630 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jan 23 01:08:22.478567 sshd[6527]: Connection closed by 10.0.0.1 port 40618 Jan 23 01:08:22.479496 sshd-session[6524]: pam_unix(sshd:session): session closed for user core Jan 23 01:08:22.513385 systemd[1]: sshd@32-10.0.0.18:22-10.0.0.1:40618.service: Deactivated successfully. Jan 23 01:08:22.518262 containerd[1567]: time="2026-01-23T01:08:22.517395154Z" level=warning msg="container event discarded" container=230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56 type=CONTAINER_STARTED_EVENT Jan 23 01:08:22.519978 systemd[1]: session-33.scope: Deactivated successfully. Jan 23 01:08:22.530156 systemd-logind[1547]: Session 33 logged out. Waiting for processes to exit. Jan 23 01:08:22.540410 systemd[1]: Started sshd@33-10.0.0.18:22-10.0.0.1:37688.service - OpenSSH per-connection server daemon (10.0.0.1:37688). Jan 23 01:08:22.551886 systemd-logind[1547]: Removed session 33. Jan 23 01:08:22.680035 sshd[6539]: Accepted publickey for core from 10.0.0.1 port 37688 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:08:22.702508 sshd-session[6539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:08:22.725574 systemd-logind[1547]: New session 34 of user core. Jan 23 01:08:22.741143 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jan 23 01:08:23.583586 containerd[1567]: time="2026-01-23T01:08:23.580938490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:08:23.752460 containerd[1567]: time="2026-01-23T01:08:23.752399662Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:23.754586 containerd[1567]: time="2026-01-23T01:08:23.754531178Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:08:23.754679 containerd[1567]: time="2026-01-23T01:08:23.754617957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:08:23.755088 kubelet[2904]: E0123 01:08:23.755030 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:23.755709 kubelet[2904]: E0123 01:08:23.755100 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:23.758134 kubelet[2904]: E0123 01:08:23.758023 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2cws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-6cd579f464-d54m6_calico-apiserver(3c57c36f-e9c4-4469-830b-86d51909b784): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:23.759467 kubelet[2904]: E0123 01:08:23.759361 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784" Jan 23 01:08:24.465336 sshd[6542]: Connection closed by 10.0.0.1 port 37688 Jan 23 01:08:24.469203 sshd-session[6539]: pam_unix(sshd:session): session closed for user core Jan 23 01:08:24.512576 systemd[1]: sshd@33-10.0.0.18:22-10.0.0.1:37688.service: Deactivated successfully. Jan 23 01:08:24.521732 systemd[1]: session-34.scope: Deactivated successfully. Jan 23 01:08:24.525552 systemd-logind[1547]: Session 34 logged out. Waiting for processes to exit. Jan 23 01:08:24.538981 systemd[1]: Started sshd@34-10.0.0.18:22-10.0.0.1:37696.service - OpenSSH per-connection server daemon (10.0.0.1:37696). Jan 23 01:08:24.541199 systemd-logind[1547]: Removed session 34. 
Jan 23 01:08:24.575991 containerd[1567]: time="2026-01-23T01:08:24.575749742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:08:24.662634 containerd[1567]: time="2026-01-23T01:08:24.662472133Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:24.669529 containerd[1567]: time="2026-01-23T01:08:24.669397499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:08:24.670244 containerd[1567]: time="2026-01-23T01:08:24.669993234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:08:24.673508 kubelet[2904]: E0123 01:08:24.672104 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:08:24.673508 kubelet[2904]: E0123 01:08:24.672175 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:08:24.673508 kubelet[2904]: E0123 01:08:24.672335 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a705283100714243a961fb2d223d106b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rfm6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57c574bd64-f4j4m_calico-system(f3049eb1-9735-4370-a74a-2cab9800bc64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:24.689651 containerd[1567]: time="2026-01-23T01:08:24.689612113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 
01:08:24.778833 sshd[6564]: Accepted publickey for core from 10.0.0.1 port 37696 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:08:24.781120 containerd[1567]: time="2026-01-23T01:08:24.780075611Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:24.784562 sshd-session[6564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:08:24.804982 containerd[1567]: time="2026-01-23T01:08:24.802636271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:08:24.804982 containerd[1567]: time="2026-01-23T01:08:24.802717628Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:08:24.805114 kubelet[2904]: E0123 01:08:24.804209 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:08:24.805114 kubelet[2904]: E0123 01:08:24.804271 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:08:24.805114 kubelet[2904]: E0123 01:08:24.804424 2904 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfm6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57c574bd64-f4j4m_calico-system(f3049eb1-9735-4370-a74a-2cab9800bc64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:24.807488 kubelet[2904]: E0123 01:08:24.807030 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57c574bd64-f4j4m" podUID="f3049eb1-9735-4370-a74a-2cab9800bc64" Jan 23 01:08:24.827033 systemd-logind[1547]: New session 35 of user core. Jan 23 01:08:24.838425 systemd[1]: Started session-35.scope - Session 35 of User core. 
Jan 23 01:08:25.577751 containerd[1567]: time="2026-01-23T01:08:25.577625159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:08:25.687449 containerd[1567]: time="2026-01-23T01:08:25.685615523Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:25.705194 containerd[1567]: time="2026-01-23T01:08:25.704625274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:08:25.705194 containerd[1567]: time="2026-01-23T01:08:25.705144892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:08:25.706922 kubelet[2904]: E0123 01:08:25.706587 2904 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:25.706922 kubelet[2904]: E0123 01:08:25.706666 2904 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:25.707417 kubelet[2904]: E0123 01:08:25.707352 2904 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7xws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-6cd579f464-47gkr_calico-apiserver(aa11cfa4-c767-44e1-bc2c-24c685ae9875): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:25.709074 kubelet[2904]: E0123 01:08:25.708996 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875" Jan 23 01:08:25.908634 sshd[6567]: Connection closed by 10.0.0.1 port 37696 Jan 23 01:08:25.915193 sshd-session[6564]: pam_unix(sshd:session): session closed for user core Jan 23 01:08:25.929242 systemd[1]: Started sshd@35-10.0.0.18:22-10.0.0.1:37704.service - OpenSSH per-connection server daemon (10.0.0.1:37704). Jan 23 01:08:25.930674 systemd[1]: sshd@34-10.0.0.18:22-10.0.0.1:37696.service: Deactivated successfully. Jan 23 01:08:25.940627 systemd[1]: session-35.scope: Deactivated successfully. Jan 23 01:08:25.946445 systemd-logind[1547]: Session 35 logged out. Waiting for processes to exit. Jan 23 01:08:25.956032 systemd-logind[1547]: Removed session 35. Jan 23 01:08:26.024193 sshd[6576]: Accepted publickey for core from 10.0.0.1 port 37704 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:08:26.028911 sshd-session[6576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:08:26.047957 systemd-logind[1547]: New session 36 of user core. 
Jan 23 01:08:26.056553 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 23 01:08:26.418612 sshd[6582]: Connection closed by 10.0.0.1 port 37704 Jan 23 01:08:26.420387 sshd-session[6576]: pam_unix(sshd:session): session closed for user core Jan 23 01:08:26.433213 systemd-logind[1547]: Session 36 logged out. Waiting for processes to exit. Jan 23 01:08:26.434542 systemd[1]: sshd@35-10.0.0.18:22-10.0.0.1:37704.service: Deactivated successfully. Jan 23 01:08:26.442671 systemd[1]: session-36.scope: Deactivated successfully. Jan 23 01:08:26.447062 systemd-logind[1547]: Removed session 36. Jan 23 01:08:29.566225 kubelet[2904]: E0123 01:08:29.565746 2904 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:08:31.467245 systemd[1]: Started sshd@36-10.0.0.18:22-10.0.0.1:37716.service - OpenSSH per-connection server daemon (10.0.0.1:37716). Jan 23 01:08:31.580410 kubelet[2904]: E0123 01:08:31.580240 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e" Jan 23 01:08:31.633111 sshd[6597]: Accepted publickey for core from 10.0.0.1 port 37716 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:08:31.637063 sshd-session[6597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:08:31.651437 systemd-logind[1547]: New session 37 of user core. 
Jan 23 01:08:31.660335 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 23 01:08:31.865230 sshd[6600]: Connection closed by 10.0.0.1 port 37716 Jan 23 01:08:31.865996 sshd-session[6597]: pam_unix(sshd:session): session closed for user core Jan 23 01:08:31.876176 systemd-logind[1547]: Session 37 logged out. Waiting for processes to exit. Jan 23 01:08:31.877243 systemd[1]: sshd@36-10.0.0.18:22-10.0.0.1:37716.service: Deactivated successfully. Jan 23 01:08:31.881180 systemd[1]: session-37.scope: Deactivated successfully. Jan 23 01:08:31.890282 systemd-logind[1547]: Removed session 37. Jan 23 01:08:33.575067 kubelet[2904]: E0123 01:08:33.574984 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e" Jan 23 01:08:33.578675 kubelet[2904]: E0123 01:08:33.578332 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:08:36.567897 kubelet[2904]: E0123 01:08:36.566906 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875" Jan 23 01:08:36.922747 systemd[1]: Started sshd@37-10.0.0.18:22-10.0.0.1:34420.service - OpenSSH per-connection server daemon (10.0.0.1:34420). Jan 23 01:08:37.042877 sshd[6626]: Accepted publickey for core from 10.0.0.1 port 34420 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:08:37.053733 sshd-session[6626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:08:37.082531 systemd-logind[1547]: New session 38 of user core. Jan 23 01:08:37.085268 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 23 01:08:37.445651 sshd[6633]: Connection closed by 10.0.0.1 port 34420 Jan 23 01:08:37.447124 sshd-session[6626]: pam_unix(sshd:session): session closed for user core Jan 23 01:08:37.458984 systemd-logind[1547]: Session 38 logged out. Waiting for processes to exit. Jan 23 01:08:37.460188 systemd[1]: sshd@37-10.0.0.18:22-10.0.0.1:34420.service: Deactivated successfully. 
Jan 23 01:08:37.466666 systemd[1]: session-38.scope: Deactivated successfully. Jan 23 01:08:37.473893 systemd-logind[1547]: Removed session 38. Jan 23 01:08:37.572281 kubelet[2904]: E0123 01:08:37.570543 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784" Jan 23 01:08:38.583098 kubelet[2904]: E0123 01:08:38.582878 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57c574bd64-f4j4m" podUID="f3049eb1-9735-4370-a74a-2cab9800bc64" Jan 23 01:08:42.511328 systemd[1]: Started sshd@38-10.0.0.18:22-10.0.0.1:36770.service - OpenSSH per-connection server daemon (10.0.0.1:36770). 
Jan 23 01:08:42.658110 sshd[6655]: Accepted publickey for core from 10.0.0.1 port 36770 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:08:42.664572 sshd-session[6655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:08:42.697082 systemd-logind[1547]: New session 39 of user core. Jan 23 01:08:42.726094 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 23 01:08:43.121401 sshd[6659]: Connection closed by 10.0.0.1 port 36770 Jan 23 01:08:43.124585 sshd-session[6655]: pam_unix(sshd:session): session closed for user core Jan 23 01:08:43.140361 systemd[1]: sshd@38-10.0.0.18:22-10.0.0.1:36770.service: Deactivated successfully. Jan 23 01:08:43.157698 systemd[1]: session-39.scope: Deactivated successfully. Jan 23 01:08:43.161885 systemd-logind[1547]: Session 39 logged out. Waiting for processes to exit. Jan 23 01:08:43.168916 systemd-logind[1547]: Removed session 39. Jan 23 01:08:43.567873 kubelet[2904]: E0123 01:08:43.567059 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5hvbp" podUID="4d54e261-de28-4a61-bcdc-0ebb829e113e" Jan 23 01:08:46.573326 kubelet[2904]: E0123 01:08:46.573270 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed 
to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc89fd94-gvlr2" podUID="5ea72ad9-04e5-48e1-a1f3-bd44567b901e" Jan 23 01:08:46.579254 kubelet[2904]: E0123 01:08:46.578101 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pk4tl" podUID="77cee7a3-d314-42b2-8d1b-22ce21da8d56" Jan 23 01:08:47.966006 containerd[1567]: time="2026-01-23T01:08:47.965704660Z" level=warning msg="container event discarded" container=230fe583711a4f8c97e989e2ea244a747218ff0e024270b402ea7fa2bf1eed56 type=CONTAINER_STOPPED_EVENT Jan 23 01:08:48.128005 containerd[1567]: time="2026-01-23T01:08:48.127617694Z" level=warning msg="container event discarded" container=9d5db59dc60e9a5edd390e30ba247371a434c0f0d0b23ae0b2c40b694c6c0398 type=CONTAINER_STOPPED_EVENT Jan 23 01:08:48.156758 containerd[1567]: time="2026-01-23T01:08:48.156530975Z" level=warning msg="container event discarded" container=716b1c984fc91250425933d68dc456812436a34eccde144fe2e1546cb32977e1 
type=CONTAINER_STOPPED_EVENT Jan 23 01:08:48.156651 systemd[1]: Started sshd@39-10.0.0.18:22-10.0.0.1:36772.service - OpenSSH per-connection server daemon (10.0.0.1:36772). Jan 23 01:08:48.264382 sshd[6699]: Accepted publickey for core from 10.0.0.1 port 36772 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:08:48.269424 sshd-session[6699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:08:48.284189 systemd-logind[1547]: New session 40 of user core. Jan 23 01:08:48.289408 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 23 01:08:48.611054 sshd[6702]: Connection closed by 10.0.0.1 port 36772 Jan 23 01:08:48.612492 sshd-session[6699]: pam_unix(sshd:session): session closed for user core Jan 23 01:08:48.621644 systemd[1]: sshd@39-10.0.0.18:22-10.0.0.1:36772.service: Deactivated successfully. Jan 23 01:08:48.627238 systemd[1]: session-40.scope: Deactivated successfully. Jan 23 01:08:48.631293 systemd-logind[1547]: Session 40 logged out. Waiting for processes to exit. Jan 23 01:08:48.635734 systemd-logind[1547]: Removed session 40. 
Jan 23 01:08:49.217096 containerd[1567]: time="2026-01-23T01:08:49.217014166Z" level=warning msg="container event discarded" container=2b27375bf613e1a1f20f8525070b4f234b1a6428470ffc4922cbb050692c69c4 type=CONTAINER_CREATED_EVENT Jan 23 01:08:49.370342 containerd[1567]: time="2026-01-23T01:08:49.370088304Z" level=warning msg="container event discarded" container=93f166fed0d74b1dc975186b30f1030661fb778681600a75d60e713bcb2bcc92 type=CONTAINER_CREATED_EVENT Jan 23 01:08:49.370342 containerd[1567]: time="2026-01-23T01:08:49.370268074Z" level=warning msg="container event discarded" container=00ecf8a5373306f8b1d7009e451bfea59a1bc12435483b090bda6249c6fb4b23 type=CONTAINER_CREATED_EVENT Jan 23 01:08:49.578128 kubelet[2904]: E0123 01:08:49.577263 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57c574bd64-f4j4m" podUID="f3049eb1-9735-4370-a74a-2cab9800bc64" Jan 23 01:08:50.117167 containerd[1567]: time="2026-01-23T01:08:50.116664547Z" level=warning msg="container event discarded" container=2b27375bf613e1a1f20f8525070b4f234b1a6428470ffc4922cbb050692c69c4 type=CONTAINER_STARTED_EVENT Jan 23 01:08:50.217137 
containerd[1567]: time="2026-01-23T01:08:50.217059615Z" level=warning msg="container event discarded" container=93f166fed0d74b1dc975186b30f1030661fb778681600a75d60e713bcb2bcc92 type=CONTAINER_STARTED_EVENT Jan 23 01:08:50.514007 containerd[1567]: time="2026-01-23T01:08:50.512135330Z" level=warning msg="container event discarded" container=00ecf8a5373306f8b1d7009e451bfea59a1bc12435483b090bda6249c6fb4b23 type=CONTAINER_STARTED_EVENT Jan 23 01:08:50.566298 kubelet[2904]: E0123 01:08:50.566238 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-47gkr" podUID="aa11cfa4-c767-44e1-bc2c-24c685ae9875" Jan 23 01:08:50.579368 kubelet[2904]: E0123 01:08:50.579107 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cd579f464-d54m6" podUID="3c57c36f-e9c4-4469-830b-86d51909b784"