Jan 23 19:26:12.790533 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026
Jan 23 19:26:12.790568 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 19:26:12.790585 kernel: BIOS-provided physical RAM map:
Jan 23 19:26:12.790593 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 23 19:26:12.790600 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 23 19:26:12.790608 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 19:26:12.790618 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 23 19:26:12.790629 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 23 19:26:12.790669 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 23 19:26:12.790679 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 23 19:26:12.790687 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 19:26:12.790702 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 19:26:12.790712 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 19:26:12.790721 kernel: NX (Execute Disable) protection: active
Jan 23 19:26:12.790730 kernel: APIC: Static calls initialized
Jan 23 19:26:12.790739 kernel: SMBIOS 2.8 present.
Jan 23 19:26:12.790751 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 23 19:26:12.790800 kernel: DMI: Memory slots populated: 1/1
Jan 23 19:26:12.790940 kernel: Hypervisor detected: KVM
Jan 23 19:26:12.790954 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 23 19:26:12.790965 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 19:26:12.790977 kernel: kvm-clock: using sched offset of 24525417544 cycles
Jan 23 19:26:12.790988 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 19:26:12.790997 kernel: tsc: Detected 2445.426 MHz processor
Jan 23 19:26:12.791005 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 19:26:12.791014 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 19:26:12.791028 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 23 19:26:12.791037 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 19:26:12.791048 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 19:26:12.791060 kernel: Using GB pages for direct mapping
Jan 23 19:26:12.791069 kernel: ACPI: Early table checksum verification disabled
Jan 23 19:26:12.791077 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 23 19:26:12.791086 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:26:12.791095 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:26:12.791103 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:26:12.791117 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 23 19:26:12.791128 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:26:12.791137 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:26:12.791146 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:26:12.791192 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:26:12.791211 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 23 19:26:12.791224 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 23 19:26:12.791292 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 23 19:26:12.791301 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 23 19:26:12.791310 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 23 19:26:12.791319 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 23 19:26:12.791328 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 23 19:26:12.791338 kernel: No NUMA configuration found
Jan 23 19:26:12.791350 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 23 19:26:12.791365 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 23 19:26:12.791415 kernel: Zone ranges:
Jan 23 19:26:12.791458 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 19:26:12.791468 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 23 19:26:12.791526 kernel: Normal empty
Jan 23 19:26:12.791538 kernel: Device empty
Jan 23 19:26:12.791581 kernel: Movable zone start for each node
Jan 23 19:26:12.791592 kernel: Early memory node ranges
Jan 23 19:26:12.791634 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 19:26:12.791679 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 23 19:26:12.791689 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 23 19:26:12.791735 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 19:26:12.791745 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 19:26:12.791783 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 23 19:26:12.791795 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 19:26:12.791833 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 19:26:12.791844 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 19:26:12.791853 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 19:26:12.791894 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 19:26:12.791905 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 19:26:12.791914 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 19:26:12.791923 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 19:26:12.791932 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 19:26:12.791944 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 19:26:12.792104 kernel: TSC deadline timer available
Jan 23 19:26:12.792117 kernel: CPU topo: Max. logical packages: 1
Jan 23 19:26:12.792126 kernel: CPU topo: Max. logical dies: 1
Jan 23 19:26:12.792135 kernel: CPU topo: Max. dies per package: 1
Jan 23 19:26:12.792149 kernel: CPU topo: Max. threads per core: 1
Jan 23 19:26:12.792196 kernel: CPU topo: Num. cores per package: 4
Jan 23 19:26:12.792206 kernel: CPU topo: Num. threads per package: 4
Jan 23 19:26:12.792215 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 23 19:26:12.792224 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 19:26:12.792288 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 23 19:26:12.792298 kernel: kvm-guest: setup PV sched yield
Jan 23 19:26:12.792307 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 23 19:26:12.792317 kernel: Booting paravirtualized kernel on KVM
Jan 23 19:26:12.792335 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 19:26:12.792344 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 23 19:26:12.792353 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 23 19:26:12.792362 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 23 19:26:12.792371 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 23 19:26:12.792379 kernel: kvm-guest: PV spinlocks enabled
Jan 23 19:26:12.792390 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 19:26:12.792403 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 19:26:12.792417 kernel: random: crng init done
Jan 23 19:26:12.792426 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 19:26:12.792435 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 19:26:12.792444 kernel: Fallback order for Node 0: 0
Jan 23 19:26:12.792454 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 23 19:26:12.792466 kernel: Policy zone: DMA32
Jan 23 19:26:12.792475 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 19:26:12.792525 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 23 19:26:12.792537 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 19:26:12.792552 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 19:26:12.792561 kernel: Dynamic Preempt: voluntary
Jan 23 19:26:12.792570 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 19:26:12.792580 kernel: rcu: RCU event tracing is enabled.
Jan 23 19:26:12.792590 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 23 19:26:12.792599 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 19:26:12.792612 kernel: Rude variant of Tasks RCU enabled.
Jan 23 19:26:12.792621 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 19:26:12.792630 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 19:26:12.792643 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 23 19:26:12.792652 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 19:26:12.792662 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 19:26:12.792675 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 19:26:12.792685 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 23 19:26:12.792694 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 19:26:12.792714 kernel: Console: colour VGA+ 80x25
Jan 23 19:26:12.792727 kernel: printk: legacy console [ttyS0] enabled
Jan 23 19:26:12.792741 kernel: ACPI: Core revision 20240827
Jan 23 19:26:12.792751 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 23 19:26:12.792760 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 19:26:12.792769 kernel: x2apic enabled
Jan 23 19:26:12.792782 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 19:26:12.792938 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 23 19:26:12.792953 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 23 19:26:12.792966 kernel: kvm-guest: setup PV IPIs
Jan 23 19:26:12.792976 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 19:26:12.792991 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 23 19:26:12.793000 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 23 19:26:12.793010 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 19:26:12.793020 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 19:26:12.793030 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 19:26:12.793040 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 19:26:12.793051 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 19:26:12.793061 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 19:26:12.793075 kernel: Speculative Store Bypass: Vulnerable
Jan 23 19:26:12.793087 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 19:26:12.793100 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 19:26:12.793111 kernel: active return thunk: srso_alias_return_thunk
Jan 23 19:26:12.793122 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 23 19:26:12.793135 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 23 19:26:12.793148 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 19:26:12.793158 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 19:26:12.793168 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 19:26:12.793182 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 19:26:12.793191 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 19:26:12.793201 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 23 19:26:12.793213 kernel: Freeing SMP alternatives memory: 32K
Jan 23 19:26:12.793282 kernel: pid_max: default: 32768 minimum: 301
Jan 23 19:26:12.793296 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 19:26:12.793308 kernel: landlock: Up and running.
Jan 23 19:26:12.793319 kernel: SELinux: Initializing.
Jan 23 19:26:12.793330 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 19:26:12.793346 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 19:26:12.793358 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 23 19:26:12.793369 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 23 19:26:12.793380 kernel: signal: max sigframe size: 1776
Jan 23 19:26:12.793391 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 19:26:12.793403 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 19:26:12.793414 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 19:26:12.793425 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 19:26:12.793436 kernel: smp: Bringing up secondary CPUs ...
Jan 23 19:26:12.793450 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 19:26:12.793461 kernel: .... node #0, CPUs: #1 #2 #3
Jan 23 19:26:12.793472 kernel: smp: Brought up 1 node, 4 CPUs
Jan 23 19:26:12.793528 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 23 19:26:12.793540 kernel: Memory: 2420716K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145096K reserved, 0K cma-reserved)
Jan 23 19:26:12.793549 kernel: devtmpfs: initialized
Jan 23 19:26:12.793559 kernel: x86/mm: Memory block size: 128MB
Jan 23 19:26:12.793569 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 19:26:12.793582 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 23 19:26:12.793597 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 19:26:12.793607 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 19:26:12.793616 kernel: audit: initializing netlink subsys (disabled)
Jan 23 19:26:12.793627 kernel: audit: type=2000 audit(1769196361.606:1): state=initialized audit_enabled=0 res=1
Jan 23 19:26:12.793640 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 19:26:12.793649 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 19:26:12.793659 kernel: cpuidle: using governor menu
Jan 23 19:26:12.793668 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 19:26:12.793679 kernel: dca service started, version 1.12.1
Jan 23 19:26:12.793697 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 23 19:26:12.793707 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 23 19:26:12.793717 kernel: PCI: Using configuration type 1 for base access
Jan 23 19:26:12.793727 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 19:26:12.793737 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 19:26:12.793751 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 19:26:12.793761 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 19:26:12.793770 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 19:26:12.793784 kernel: ACPI: Added _OSI(Module Device)
Jan 23 19:26:12.793796 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 19:26:12.793934 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 19:26:12.793947 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 19:26:12.793957 kernel: ACPI: Interpreter enabled
Jan 23 19:26:12.793967 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 23 19:26:12.793978 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 19:26:12.793989 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 19:26:12.794000 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 19:26:12.794012 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 19:26:12.794030 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 19:26:12.794448 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 19:26:12.794674 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 19:26:12.794979 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 19:26:12.794996 kernel: PCI host bridge to bus 0000:00
Jan 23 19:26:12.795173 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 19:26:12.795660 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 19:26:12.795951 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 19:26:12.796105 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 23 19:26:12.796323 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 19:26:12.796522 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 23 19:26:12.796721 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 19:26:12.797064 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 19:26:12.797317 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 23 19:26:12.797540 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 23 19:26:12.797708 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 23 19:26:12.798018 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 23 19:26:12.798198 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 19:26:12.798533 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 19:26:12.798708 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 23 19:26:12.799012 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 23 19:26:12.799183 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 23 19:26:12.799591 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 19:26:12.799766 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 23 19:26:12.800067 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 23 19:26:12.800313 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 23 19:26:12.800546 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 19:26:12.800758 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 23 19:26:12.801107 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 23 19:26:12.801337 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 23 19:26:12.801563 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 23 19:26:12.801805 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 19:26:12.802029 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 19:26:12.802222 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 19:26:12.802469 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 23 19:26:12.802985 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 23 19:26:12.803182 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 19:26:12.803408 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 23 19:26:12.803427 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 19:26:12.803440 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 19:26:12.803456 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 19:26:12.803468 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 19:26:12.803533 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 19:26:12.803544 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 19:26:12.803554 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 19:26:12.803563 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 19:26:12.803572 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 19:26:12.803583 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 19:26:12.803595 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 19:26:12.803612 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 19:26:12.803624 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 19:26:12.803636 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 19:26:12.803647 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 19:26:12.803659 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 19:26:12.803670 kernel: iommu: Default domain type: Translated
Jan 23 19:26:12.803682 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 19:26:12.803693 kernel: PCI: Using ACPI for IRQ routing
Jan 23 19:26:12.803705 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 19:26:12.803721 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 23 19:26:12.803732 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 23 19:26:12.804053 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 19:26:12.804329 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 19:26:12.804550 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 19:26:12.804568 kernel: vgaarb: loaded
Jan 23 19:26:12.804581 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 23 19:26:12.804593 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 23 19:26:12.804609 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 19:26:12.804621 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 19:26:12.804633 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 19:26:12.804645 kernel: pnp: PnP ACPI init
Jan 23 19:26:12.805062 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 23 19:26:12.805082 kernel: pnp: PnP ACPI: found 6 devices
Jan 23 19:26:12.805093 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 19:26:12.805105 kernel: NET: Registered PF_INET protocol family
Jan 23 19:26:12.805118 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 19:26:12.805133 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 19:26:12.805143 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 19:26:12.805153 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 19:26:12.805162 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 19:26:12.805174 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 19:26:12.805187 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 19:26:12.805198 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 19:26:12.805208 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 19:26:12.805222 kernel: NET: Registered PF_XDP protocol family
Jan 23 19:26:12.805543 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 19:26:12.805759 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 19:26:12.806070 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 19:26:12.806282 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 23 19:26:12.806442 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 23 19:26:12.806640 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 23 19:26:12.806659 kernel: PCI: CLS 0 bytes, default 64
Jan 23 19:26:12.806671 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 23 19:26:12.806688 kernel: Initialise system trusted keyrings
Jan 23 19:26:12.806700 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 19:26:12.806711 kernel: Key type asymmetric registered
Jan 23 19:26:12.806723 kernel: Asymmetric key parser 'x509' registered
Jan 23 19:26:12.806736 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 19:26:12.806748 kernel: io scheduler mq-deadline registered
Jan 23 19:26:12.806758 kernel: io scheduler kyber registered
Jan 23 19:26:12.806768 kernel: io scheduler bfq registered
Jan 23 19:26:12.806778 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 19:26:12.806793 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 23 19:26:12.806804 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 23 19:26:12.806898 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 23 19:26:12.806910 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 19:26:12.806922 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 19:26:12.806933 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 19:26:12.806945 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 19:26:12.806957 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 19:26:12.807177 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 23 19:26:12.807203 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 19:26:12.807425 kernel: rtc_cmos 00:04: registered as rtc0
Jan 23 19:26:12.807645 kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T19:26:11 UTC (1769196371)
Jan 23 19:26:12.807813 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 23 19:26:12.807833 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 19:26:12.807844 kernel: NET: Registered PF_INET6 protocol family
Jan 23 19:26:12.807854 kernel: Segment Routing with IPv6
Jan 23 19:26:12.807865 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 19:26:12.807883 kernel: NET: Registered PF_PACKET protocol family
Jan 23 19:26:12.807895 kernel: Key type dns_resolver registered
Jan 23 19:26:12.807906 kernel: IPI shorthand broadcast: enabled
Jan 23 19:26:12.807918 kernel: sched_clock: Marking stable (7671023778, 2242015443)->(11039018000, -1125978779)
Jan 23 19:26:12.807929 kernel: registered taskstats version 1
Jan 23 19:26:12.807941 kernel: Loading compiled-in X.509 certificates
Jan 23 19:26:12.807953 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6'
Jan 23 19:26:12.807964 kernel: Demotion targets for Node 0: null
Jan 23 19:26:12.807976 kernel: Key type .fscrypt registered
Jan 23 19:26:12.807991 kernel: Key type fscrypt-provisioning registered
Jan 23 19:26:12.808003 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 19:26:12.808014 kernel: ima: Allocated hash algorithm: sha1
Jan 23 19:26:12.808025 kernel: ima: No architecture policies found
Jan 23 19:26:12.808036 kernel: clk: Disabling unused clocks
Jan 23 19:26:12.808046 kernel: Warning: unable to open an initial console.
Jan 23 19:26:12.808057 kernel: Freeing unused kernel image (initmem) memory: 46200K
Jan 23 19:26:12.808071 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 19:26:12.808085 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 19:26:12.808095 kernel: Run /init as init process
Jan 23 19:26:12.808105 kernel: with arguments:
Jan 23 19:26:12.808115 kernel: /init
Jan 23 19:26:12.808125 kernel: with environment:
Jan 23 19:26:12.808137 kernel: HOME=/
Jan 23 19:26:12.808147 kernel: TERM=linux
Jan 23 19:26:12.808160 systemd[1]: Successfully made /usr/ read-only.
Jan 23 19:26:12.808175 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 19:26:12.808274 systemd[1]: Detected virtualization kvm.
Jan 23 19:26:12.808290 systemd[1]: Detected architecture x86-64.
Jan 23 19:26:12.808301 systemd[1]: Running in initrd.
Jan 23 19:26:12.808313 systemd[1]: No hostname configured, using default hostname.
Jan 23 19:26:12.808327 systemd[1]: Hostname set to .
Jan 23 19:26:12.808339 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 19:26:12.808351 systemd[1]: Queued start job for default target initrd.target.
Jan 23 19:26:12.808369 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 19:26:12.808397 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 19:26:12.808412 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 19:26:12.808423 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 19:26:12.808435 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 19:26:12.808450 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 19:26:12.808463 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 19:26:12.808476 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 19:26:12.808532 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 19:26:12.808544 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 19:26:12.808555 systemd[1]: Reached target paths.target - Path Units.
Jan 23 19:26:12.808566 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 19:26:12.808577 systemd[1]: Reached target swap.target - Swaps.
Jan 23 19:26:12.808597 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 19:26:12.808608 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 19:26:12.808619 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 19:26:12.808632 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 19:26:12.808647 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 19:26:12.808658 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 19:26:12.808668 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 19:26:12.808682 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 19:26:12.808693 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 19:26:12.808709 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 19:26:12.808722 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 19:26:12.808734 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 19:26:12.808747 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 19:26:12.808759 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 19:26:12.808770 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 19:26:12.808782 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 19:26:12.808794 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:26:12.808810 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 19:26:12.808826 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 19:26:12.808874 systemd-journald[203]: Collecting audit messages is disabled.
Jan 23 19:26:12.808904 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 19:26:12.808917 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 19:26:12.808933 systemd-journald[203]: Journal started
Jan 23 19:26:12.808959 systemd-journald[203]: Runtime Journal (/run/log/journal/4fa04f343fef4f97aa4e7d6ed164d193) is 6M, max 48.3M, 42.2M free.
Jan 23 19:26:12.844648 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 19:26:12.853726 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 19:26:12.866000 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 19:26:12.881456 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 19:26:13.085909 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 19:26:13.085971 kernel: Bridge firewalling registered
Jan 23 19:26:12.883406 systemd-modules-load[204]: Inserted module 'overlay'
Jan 23 19:26:12.933999 systemd-modules-load[204]: Inserted module 'br_netfilter'
Jan 23 19:26:13.084905 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 19:26:13.089357 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 19:26:13.134946 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:26:13.161777 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 19:26:13.208721 systemd-tmpfiles[215]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 19:26:13.217831 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 19:26:13.218719 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 19:26:13.221765 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 19:26:13.226322 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 19:26:13.259894 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 19:26:13.276765 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 19:26:13.332787 systemd-resolved[233]: Positive Trust Anchors:
Jan 23 19:26:13.332831 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 19:26:13.332874 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 19:26:13.387304 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 19:26:13.336429 systemd-resolved[233]: Defaulting to hostname 'linux'.
Jan 23 19:26:13.338085 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 19:26:13.343297 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 19:26:13.597329 kernel: SCSI subsystem initialized
Jan 23 19:26:13.616075 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 19:26:13.647874 kernel: iscsi: registered transport (tcp)
Jan 23 19:26:13.682401 kernel: iscsi: registered transport (qla4xxx)
Jan 23 19:26:13.683058 kernel: QLogic iSCSI HBA Driver
Jan 23 19:26:13.745655 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 19:26:13.795963 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 19:26:13.806576 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 19:26:13.993065 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 19:26:14.001738 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 19:26:14.110343 kernel: raid6: avx2x4 gen() 27914 MB/s
Jan 23 19:26:14.130578 kernel: raid6: avx2x2 gen() 21599 MB/s
Jan 23 19:26:14.151187 kernel: raid6: avx2x1 gen() 12912 MB/s
Jan 23 19:26:14.151285 kernel: raid6: using algorithm avx2x4 gen() 27914 MB/s
Jan 23 19:26:14.172971 kernel: raid6: .... xor() 3865 MB/s, rmw enabled
Jan 23 19:26:14.173016 kernel: raid6: using avx2x2 recovery algorithm
Jan 23 19:26:14.221841 kernel: xor: automatically using best checksumming function avx
Jan 23 19:26:14.545655 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 19:26:14.564993 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 19:26:14.576063 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 19:26:14.639935 systemd-udevd[452]: Using default interface naming scheme 'v255'.
Jan 23 19:26:14.655614 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 19:26:14.670664 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 19:26:14.746790 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation
Jan 23 19:26:14.846752 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 19:26:14.848491 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 19:26:15.025883 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 19:26:15.073038 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 19:26:15.233923 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 19:26:15.234347 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 23 19:26:15.234113 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:26:15.401594 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 23 19:26:15.407651 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 19:26:15.407681 kernel: GPT:9289727 != 19775487
Jan 23 19:26:15.407697 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 19:26:15.407714 kernel: GPT:9289727 != 19775487
Jan 23 19:26:15.407727 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 19:26:15.407741 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 19:26:15.407757 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 19:26:15.401781 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:26:15.412680 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:26:15.419604 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 19:26:15.470425 kernel: libata version 3.00 loaded.
Jan 23 19:26:15.484316 kernel: ahci 0000:00:1f.2: version 3.0
Jan 23 19:26:15.489311 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 23 19:26:15.489373 kernel: AES CTR mode by8 optimization enabled
Jan 23 19:26:15.493419 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 23 19:26:15.493784 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 23 19:26:15.494029 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 23 19:26:15.503372 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 23 19:26:15.503451 kernel: scsi host0: ahci
Jan 23 19:26:15.509619 kernel: scsi host1: ahci
Jan 23 19:26:15.520364 kernel: scsi host2: ahci
Jan 23 19:26:15.523608 kernel: scsi host3: ahci
Jan 23 19:26:15.530310 kernel: scsi host4: ahci
Jan 23 19:26:15.539418 kernel: scsi host5: ahci
Jan 23 19:26:15.540314 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31 lpm-pol 1
Jan 23 19:26:15.540354 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31 lpm-pol 1
Jan 23 19:26:15.540373 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31 lpm-pol 1
Jan 23 19:26:15.540390 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31 lpm-pol 1
Jan 23 19:26:15.540407 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31 lpm-pol 1
Jan 23 19:26:15.540424 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31 lpm-pol 1
Jan 23 19:26:15.585435 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 23 19:26:15.826328 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:26:15.859085 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 23 19:26:15.859157 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 23 19:26:15.859847 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 23 19:26:15.866596 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 23 19:26:15.868922 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 23 19:26:15.883071 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 23 19:26:15.883111 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 23 19:26:15.883130 kernel: ata3.00: LPM support broken, forcing max_power
Jan 23 19:26:15.883145 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 23 19:26:15.889156 kernel: ata3.00: applying bridge limits
Jan 23 19:26:15.908615 kernel: ata3.00: LPM support broken, forcing max_power
Jan 23 19:26:15.908681 kernel: ata3.00: configured for UDMA/100
Jan 23 19:26:15.913155 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 19:26:15.917586 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 23 19:26:15.948002 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 23 19:26:15.960148 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 23 19:26:15.981980 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 19:26:16.026352 disk-uuid[618]: Primary Header is updated.
Jan 23 19:26:16.026352 disk-uuid[618]: Secondary Entries is updated.
Jan 23 19:26:16.026352 disk-uuid[618]: Secondary Header is updated.
Jan 23 19:26:16.048378 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 23 19:26:16.049725 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 19:26:16.056992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 19:26:16.075815 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 23 19:26:16.561622 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 19:26:16.590751 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 19:26:16.596397 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 19:26:16.603848 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 19:26:16.612003 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 19:26:16.702507 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 19:26:17.088444 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 19:26:17.091721 disk-uuid[619]: The operation has completed successfully.
Jan 23 19:26:17.165146 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 19:26:17.166636 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 19:26:17.228503 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 19:26:17.267064 sh[649]: Success
Jan 23 19:26:17.320208 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 19:26:17.320360 kernel: device-mapper: uevent: version 1.0.3
Jan 23 19:26:17.326830 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 19:26:17.376807 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 23 19:26:17.475100 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 19:26:17.492773 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 19:26:17.499739 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 19:26:17.558636 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (661)
Jan 23 19:26:17.567555 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841
Jan 23 19:26:17.573888 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 19:26:17.611983 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 19:26:17.612067 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 19:26:17.619347 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 19:26:17.627826 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 19:26:17.643395 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 19:26:17.659854 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 19:26:17.669854 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 19:26:17.743627 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (690)
Jan 23 19:26:17.761002 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:26:17.761097 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 19:26:17.804944 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 19:26:17.805044 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 19:26:17.826499 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:26:17.853207 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 19:26:17.862855 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 19:26:18.060043 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 19:26:18.061314 ignition[761]: Ignition 2.22.0
Jan 23 19:26:18.061324 ignition[761]: Stage: fetch-offline
Jan 23 19:26:18.072585 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 19:26:18.061366 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Jan 23 19:26:18.061378 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 19:26:18.061488 ignition[761]: parsed url from cmdline: ""
Jan 23 19:26:18.061494 ignition[761]: no config URL provided
Jan 23 19:26:18.061501 ignition[761]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 19:26:18.061512 ignition[761]: no config at "/usr/lib/ignition/user.ign"
Jan 23 19:26:18.061582 ignition[761]: op(1): [started] loading QEMU firmware config module
Jan 23 19:26:18.061590 ignition[761]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 23 19:26:18.102035 ignition[761]: op(1): [finished] loading QEMU firmware config module
Jan 23 19:26:18.203174 systemd-networkd[838]: lo: Link UP
Jan 23 19:26:18.203213 systemd-networkd[838]: lo: Gained carrier
Jan 23 19:26:18.224126 systemd-networkd[838]: Enumeration completed
Jan 23 19:26:18.235119 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 19:26:18.235125 systemd-networkd[838]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 19:26:18.235918 systemd-networkd[838]: eth0: Link UP
Jan 23 19:26:18.236309 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 19:26:18.273198 systemd-networkd[838]: eth0: Gained carrier
Jan 23 19:26:18.273223 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 19:26:18.273912 systemd[1]: Reached target network.target - Network.
Jan 23 19:26:18.357383 systemd-networkd[838]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 23 19:26:18.457679 ignition[761]: parsing config with SHA512: 8139b5dff098890bf9c2f107cf5cfbc104330c54117773e2058d04681093f028851001d61cef070d02529436b71e2c615368e1cd70b3a2243de3b6704a9570bc
Jan 23 19:26:18.478493 unknown[761]: fetched base config from "system"
Jan 23 19:26:18.479081 ignition[761]: fetch-offline: fetch-offline passed
Jan 23 19:26:18.478511 unknown[761]: fetched user config from "qemu"
Jan 23 19:26:18.479160 ignition[761]: Ignition finished successfully
Jan 23 19:26:18.499586 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 19:26:18.548077 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 23 19:26:18.553717 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 19:26:18.652777 ignition[843]: Ignition 2.22.0
Jan 23 19:26:18.652829 ignition[843]: Stage: kargs
Jan 23 19:26:18.653009 ignition[843]: no configs at "/usr/lib/ignition/base.d"
Jan 23 19:26:18.653025 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 19:26:18.667146 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 19:26:18.653884 ignition[843]: kargs: kargs passed
Jan 23 19:26:18.679829 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 19:26:18.653928 ignition[843]: Ignition finished successfully
Jan 23 19:26:18.762641 ignition[851]: Ignition 2.22.0
Jan 23 19:26:18.762684 ignition[851]: Stage: disks
Jan 23 19:26:18.762879 ignition[851]: no configs at "/usr/lib/ignition/base.d"
Jan 23 19:26:18.762894 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 19:26:18.777123 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 19:26:18.764731 ignition[851]: disks: disks passed
Jan 23 19:26:18.797861 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 19:26:18.764794 ignition[851]: Ignition finished successfully
Jan 23 19:26:18.807754 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 19:26:18.824629 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 19:26:18.824733 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 19:26:18.824780 systemd[1]: Reached target basic.target - Basic System.
Jan 23 19:26:18.867851 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 19:26:18.947730 systemd-fsck[861]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 23 19:26:18.962928 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 19:26:18.986097 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 19:26:19.375889 kernel: EXT4-fs (vda9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none.
Jan 23 19:26:19.379827 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 19:26:19.396787 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 19:26:19.415908 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 19:26:19.440776 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 19:26:19.450142 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 19:26:19.450208 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 19:26:19.503900 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (869)
Jan 23 19:26:19.450303 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 19:26:19.519758 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:26:19.519797 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 19:26:19.505115 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 19:26:19.541408 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 19:26:19.587927 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 19:26:19.588015 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 19:26:19.591523 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 19:26:19.735882 initrd-setup-root[893]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 19:26:19.763357 initrd-setup-root[900]: cut: /sysroot/etc/group: No such file or directory
Jan 23 19:26:19.774192 initrd-setup-root[907]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 19:26:19.785909 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 19:26:20.100874 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 19:26:20.107595 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 19:26:20.125983 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 19:26:20.141514 systemd-networkd[838]: eth0: Gained IPv6LL
Jan 23 19:26:20.186190 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 19:26:20.198405 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:26:20.253787 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 19:26:20.287309 ignition[983]: INFO : Ignition 2.22.0
Jan 23 19:26:20.287309 ignition[983]: INFO : Stage: mount
Jan 23 19:26:20.296544 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 19:26:20.296544 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 19:26:20.296544 ignition[983]: INFO : mount: mount passed
Jan 23 19:26:20.296544 ignition[983]: INFO : Ignition finished successfully
Jan 23 19:26:20.309109 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 19:26:20.339629 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 19:26:20.390705 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 19:26:20.445661 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (996)
Jan 23 19:26:20.458837 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:26:20.458897 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 19:26:20.494422 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 19:26:20.494521 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 19:26:20.501658 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 19:26:20.589413 ignition[1013]: INFO : Ignition 2.22.0 Jan 23 19:26:20.589413 ignition[1013]: INFO : Stage: files Jan 23 19:26:20.599125 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 19:26:20.599125 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 19:26:20.599125 ignition[1013]: DEBUG : files: compiled without relabeling support, skipping Jan 23 19:26:20.625919 ignition[1013]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 19:26:20.625919 ignition[1013]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 19:26:20.649675 ignition[1013]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 19:26:20.657420 ignition[1013]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 19:26:20.665172 ignition[1013]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 19:26:20.658899 unknown[1013]: wrote ssh authorized keys file for user: core Jan 23 19:26:20.677185 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 19:26:20.677185 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 23 19:26:20.742630 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 19:26:20.944925 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 19:26:20.944925 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 19:26:20.944925 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 19:26:20.944925 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 19:26:20.944925 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 19:26:20.944925 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 19:26:20.944925 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 19:26:20.944925 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 19:26:20.944925 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 19:26:21.073914 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 19:26:21.073914 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 19:26:21.073914 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 19:26:21.073914 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 19:26:21.073914 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 19:26:21.073914 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 23 19:26:21.339942 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 19:26:22.220131 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 19:26:22.220131 ignition[1013]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 19:26:22.240380 ignition[1013]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 19:26:22.240380 ignition[1013]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 19:26:22.240380 ignition[1013]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 19:26:22.240380 ignition[1013]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 23 19:26:22.240380 ignition[1013]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 19:26:22.240380 ignition[1013]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 19:26:22.240380 ignition[1013]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 23 19:26:22.240380 ignition[1013]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 23 19:26:22.318895 ignition[1013]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 19:26:22.332049 ignition[1013]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 19:26:22.340671 ignition[1013]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 23 19:26:22.340671 ignition[1013]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 23 19:26:22.340671 ignition[1013]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 19:26:22.357699 ignition[1013]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 19:26:22.357699 ignition[1013]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 19:26:22.357699 ignition[1013]: INFO : files: files passed Jan 23 19:26:22.357699 ignition[1013]: INFO : Ignition finished successfully Jan 23 19:26:22.382872 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 19:26:22.397001 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 19:26:22.427669 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 19:26:22.437962 systemd[1]: ignition-quench.service: Deactivated successfully. 
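The ops above are Ignition's files stage executing the user config fetched from QEMU earlier. A Butane-style sketch of a config that would produce roughly this sequence; the paths and URLs are taken from the log, while the SSH key, unit body, and all other values are placeholders (several files such as install.sh and nginx.yaml are elided):

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...            # placeholder key
    storage:
      files:
        - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
          contents:
            source: https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true                      # "setting preset to enabled" above
          contents: |
            # unit body not visible in the log
        - name: coreos-metadata.service
          enabled: false                     # "setting preset to disabled" above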
Jan 23 19:26:22.438108 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 19:26:22.478511 initrd-setup-root-after-ignition[1043]: grep: /sysroot/oem/oem-release: No such file or directory Jan 23 19:26:22.502021 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 19:26:22.502021 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 19:26:22.517442 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 19:26:22.525127 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 19:26:22.531441 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 19:26:22.550561 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 19:26:22.694626 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 19:26:22.694851 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 19:26:22.715613 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 19:26:22.723378 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 19:26:22.742994 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 19:26:22.744534 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 19:26:22.830690 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 19:26:22.835833 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 19:26:22.893635 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 19:26:22.915827 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 19:26:22.936958 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 19:26:22.942274 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 19:26:22.942535 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 19:26:22.975797 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 19:26:22.975950 systemd[1]: Stopped target basic.target - Basic System. Jan 23 19:26:22.976091 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 19:26:22.976326 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 19:26:22.976471 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 19:26:22.976646 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 19:26:22.976785 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 19:26:22.976912 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 19:26:22.977060 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 19:26:22.977201 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 19:26:22.977408 systemd[1]: Stopped target swap.target - Swaps. Jan 23 19:26:22.977509 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 19:26:22.977737 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 23 19:26:22.978012 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 19:26:22.978155 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 19:26:22.978312 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 19:26:22.978963 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 19:26:23.110862 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 19:26:23.111088 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 19:26:23.245380 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 19:26:23.245557 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 19:26:23.245816 systemd[1]: Stopped target paths.target - Path Units. Jan 23 19:26:23.245912 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 19:26:23.252666 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 19:26:23.297012 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 19:26:23.298917 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 19:26:23.318986 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 19:26:23.319141 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 19:26:23.404146 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 19:26:23.404401 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 19:26:23.420021 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 19:26:23.420328 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 19:26:23.424182 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 19:26:23.424452 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 19:26:23.446503 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 19:26:23.449629 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 19:26:23.449909 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 19:26:23.495555 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 19:26:23.509466 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 19:26:23.509967 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 19:26:23.527966 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 19:26:23.528142 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 19:26:23.562543 ignition[1069]: INFO : Ignition 2.22.0 Jan 23 19:26:23.562543 ignition[1069]: INFO : Stage: umount Jan 23 19:26:23.562543 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 19:26:23.562543 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 19:26:23.596290 ignition[1069]: INFO : umount: umount passed Jan 23 19:26:23.596290 ignition[1069]: INFO : Ignition finished successfully Jan 23 19:26:23.588479 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 19:26:23.591699 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 19:26:23.591879 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 19:26:23.604339 systemd[1]: Stopped target network.target - Network. 
Jan 23 19:26:23.623491 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 19:26:23.623994 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 19:26:23.641890 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 19:26:23.642009 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 19:26:23.655968 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 19:26:23.656081 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 19:26:23.661818 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 19:26:23.661903 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 19:26:23.679354 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 19:26:23.703933 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 19:26:23.728367 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 19:26:23.728667 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 19:26:23.729166 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 19:26:23.729373 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 19:26:23.748361 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 19:26:23.749024 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 19:26:23.749189 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 19:26:23.761066 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 19:26:23.761177 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 19:26:23.769667 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 19:26:23.769771 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 19:26:23.821338 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 19:26:23.821852 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 19:26:23.822059 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 19:26:23.873413 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 19:26:23.874344 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 19:26:23.945315 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 19:26:23.945715 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 19:26:23.961931 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 19:26:23.970665 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 19:26:23.970868 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 19:26:24.005761 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 19:26:24.006509 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 19:26:24.029122 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 19:26:24.029300 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 19:26:24.043051 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 19:26:24.065445 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jan 23 19:26:24.102401 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 19:26:24.103119 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 19:26:24.108986 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 19:26:24.109554 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 19:26:24.117949 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 19:26:24.118054 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 19:26:24.139559 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 19:26:24.139680 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 19:26:24.144346 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 19:26:24.144431 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 19:26:24.167090 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 19:26:24.168722 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 19:26:24.232624 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 19:26:24.232753 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 19:26:24.270994 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 19:26:24.283444 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 19:26:24.283619 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 19:26:24.295730 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 19:26:24.296977 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 19:26:24.325435 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 19:26:24.325528 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 19:26:24.372913 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 19:26:24.375380 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 19:26:24.384423 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 19:26:24.421673 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 19:26:24.470692 systemd[1]: Switching root. Jan 23 19:26:24.549677 systemd-journald[203]: Journal stopped Jan 23 19:26:27.848216 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). Jan 23 19:26:27.848392 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 19:26:27.848407 kernel: SELinux: policy capability open_perms=1 Jan 23 19:26:27.848418 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 19:26:27.848428 kernel: SELinux: policy capability always_check_network=0 Jan 23 19:26:27.848440 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 19:26:27.848454 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 19:26:27.848464 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 19:26:27.848474 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 19:26:27.848488 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 19:26:27.848498 kernel: audit: type=1403 audit(1769196384.862:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 19:26:27.848510 systemd[1]: Successfully loaded SELinux policy in 115.296ms. 
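The audit record above carries a raw epoch timestamp, audit(1769196384.862:2). Converting it places the SELinux policy load right after the "Switching root" message at 19:26:24.549, inside the window during which the journal was stopped, which is why the record only surfaces once journald restarts:

    from datetime import datetime, timezone

    # Epoch seconds from the audit(1769196384.862:2) record above.
    ts = datetime.fromtimestamp(1769196384.862, tz=timezone.utc)
    print(ts.isoformat())  # 2026-01-23T19:26:24.862000+00:00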
Jan 23 19:26:27.848537 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 19.158ms. Jan 23 19:26:27.848549 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 19:26:27.848562 systemd[1]: Detected virtualization kvm. Jan 23 19:26:27.848575 systemd[1]: Detected architecture x86-64. Jan 23 19:26:27.848586 systemd[1]: Detected first boot. Jan 23 19:26:27.848596 systemd[1]: Initializing machine ID from VM UUID. Jan 23 19:26:27.848607 zram_generator::config[1114]: No configuration found. Jan 23 19:26:27.848650 kernel: Guest personality initialized and is inactive Jan 23 19:26:27.848666 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 19:26:27.848684 kernel: Initialized host personality Jan 23 19:26:27.848709 kernel: NET: Registered PF_VSOCK protocol family Jan 23 19:26:27.848729 systemd[1]: Populated /etc with preset unit settings. Jan 23 19:26:27.848751 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 19:26:27.848772 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 19:26:27.848792 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 19:26:27.848810 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 19:26:27.848831 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 19:26:27.848850 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 19:26:27.848868 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 19:26:27.848900 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 19:26:27.848919 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 19:26:27.848937 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 19:26:27.848959 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 19:26:27.848980 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 19:26:27.849000 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 19:26:27.849016 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 19:26:27.849032 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 19:26:27.849048 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 19:26:27.849068 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 19:26:27.849085 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 19:26:27.849100 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 19:26:27.849116 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 19:26:27.849131 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 19:26:27.849147 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jan 23 19:26:27.849162 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 19:26:27.849181 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 19:26:27.849197 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 19:26:27.849212 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 19:26:27.849295 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 19:26:27.849317 systemd[1]: Reached target slices.target - Slice Units. Jan 23 19:26:27.849333 systemd[1]: Reached target swap.target - Swaps. Jan 23 19:26:27.849348 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 19:26:27.849364 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 19:26:27.849379 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 19:26:27.849399 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 19:26:27.849416 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 19:26:27.849431 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 19:26:27.849446 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 19:26:27.849462 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 19:26:27.849477 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 19:26:27.849492 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 19:26:27.849508 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:26:27.849524 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 19:26:27.849543 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 19:26:27.849559 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 19:26:27.849576 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 19:26:27.849592 systemd[1]: Reached target machines.target - Containers. Jan 23 19:26:27.849608 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 19:26:27.849666 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 19:26:27.849685 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 19:26:27.849702 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 19:26:27.849718 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 19:26:27.849739 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 19:26:27.849755 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 19:26:27.849771 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 19:26:27.849787 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 19:26:27.849804 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jan 23 19:26:27.849820 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 19:26:27.849836 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 19:26:27.849852 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 19:26:27.849871 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 19:26:27.849889 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 19:26:27.849907 kernel: ACPI: bus type drm_connector registered Jan 23 19:26:27.849925 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 19:26:27.849941 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 19:26:27.849957 kernel: fuse: init (API version 7.41) Jan 23 19:26:27.849973 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 19:26:27.849990 kernel: loop: module loaded Jan 23 19:26:27.850008 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 19:26:27.850076 systemd-journald[1199]: Collecting audit messages is disabled. Jan 23 19:26:27.850112 systemd-journald[1199]: Journal started Jan 23 19:26:27.850148 systemd-journald[1199]: Runtime Journal (/run/log/journal/4fa04f343fef4f97aa4e7d6ed164d193) is 6M, max 48.3M, 42.2M free. Jan 23 19:26:26.322005 systemd[1]: Queued start job for default target multi-user.target. Jan 23 19:26:26.367380 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 23 19:26:26.368454 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 19:26:26.369400 systemd[1]: systemd-journald.service: Consumed 1.249s CPU time. Jan 23 19:26:27.868585 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 19:26:27.892329 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 19:26:27.920612 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 19:26:27.921075 systemd[1]: Stopped verity-setup.service. Jan 23 19:26:27.921109 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:26:27.944612 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 19:26:27.954841 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 19:26:27.964931 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 19:26:27.970207 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 19:26:27.975520 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 19:26:27.986588 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 19:26:27.992493 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 19:26:28.006458 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 19:26:28.020543 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 19:26:28.032492 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 19:26:28.032986 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 19:26:28.042498 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 23 19:26:28.046425 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 19:26:28.053616 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 19:26:28.054421 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 19:26:28.062775 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 19:26:28.063123 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 19:26:28.077821 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 19:26:28.078167 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 19:26:28.082692 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 19:26:28.083685 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 19:26:28.090138 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 19:26:28.103081 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 19:26:28.130378 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 19:26:28.146600 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 19:26:28.185613 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 19:26:28.226761 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 19:26:28.236786 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 19:26:28.243372 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 19:26:28.252998 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 19:26:28.253056 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 19:26:28.259198 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 19:26:28.280984 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 19:26:28.291821 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 19:26:28.296452 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 19:26:28.304709 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 19:26:28.329427 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 19:26:28.333327 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 19:26:28.339921 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 19:26:28.348465 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 19:26:28.371608 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 19:26:28.384442 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 19:26:28.408327 systemd-journald[1199]: Time spent on flushing to /var/log/journal/4fa04f343fef4f97aa4e7d6ed164d193 is 70.438ms for 972 entries. Jan 23 19:26:28.408327 systemd-journald[1199]: System Journal (/var/log/journal/4fa04f343fef4f97aa4e7d6ed164d193) is 8M, max 195.6M, 187.6M free. 
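The journald accounting line above reports 70.438 ms spent flushing 972 entries from the runtime journal to /var/log/journal, which averages out to roughly 72 µs per entry:

    # Average flush cost per entry, from the figures journald logged above.
    print(70.438e-3 / 972)  # ≈ 7.25e-05 s, i.e. about 72 µs per entry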
Jan 23 19:26:28.553502 systemd-journald[1199]: Received client request to flush runtime journal. Jan 23 19:26:28.553555 kernel: loop0: detected capacity change from 0 to 128560 Jan 23 19:26:28.394127 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 19:26:28.403701 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 19:26:28.429748 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 19:26:28.448375 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 19:26:28.473683 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 19:26:28.672829 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 19:26:28.681295 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 19:26:28.682154 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 19:26:28.690326 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 19:26:28.699202 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 19:26:28.748449 kernel: loop1: detected capacity change from 0 to 229808 Jan 23 19:26:28.755210 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 19:26:28.761008 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 19:26:28.782949 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Jan 23 19:26:28.782992 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Jan 23 19:26:28.789765 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 19:26:28.831568 kernel: loop2: detected capacity change from 0 to 110984 Jan 23 19:26:29.026470 kernel: loop3: detected capacity change from 0 to 128560 Jan 23 19:26:29.076890 kernel: hrtimer: interrupt took 13740748 ns Jan 23 19:26:29.288175 kernel: loop4: detected capacity change from 0 to 229808 Jan 23 19:26:29.325062 kernel: loop5: detected capacity change from 0 to 110984 Jan 23 19:26:29.357336 (sd-merge)[1258]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 23 19:26:29.358466 (sd-merge)[1258]: Merged extensions into '/usr'. Jan 23 19:26:29.370721 systemd[1]: Reload requested from client PID 1234 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 19:26:29.370794 systemd[1]: Reloading... Jan 23 19:26:29.466351 zram_generator::config[1281]: No configuration found. Jan 23 19:26:30.163979 ldconfig[1229]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 19:26:30.173968 systemd[1]: Reloading finished in 802 ms. Jan 23 19:26:30.218083 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 19:26:30.231388 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 19:26:30.247406 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 19:26:30.274146 systemd[1]: Starting ensure-sysext.service... Jan 23 19:26:30.278872 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 19:26:30.287761 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 19:26:30.318064 systemd[1]: Reload requested from client PID 1323 ('systemctl') (unit ensure-sysext.service)... 
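The (sd-merge) lines above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr; the kubernetes image is the .raw file Ignition wrote under /opt/extensions and linked into /etc/extensions earlier. For a merge to be accepted, an image must ship an extension-release file whose fields match the host; an illustrative layout (the exact values inside these images are an assumption):

    # Inside kubernetes-v1.33.0-x86-64.raw:
    #   usr/bin/...                                       (the payload)
    #   usr/lib/extension-release.d/extension-release.kubernetes
    #
    # extension-release.kubernetes:
    ID=flatcar
    SYSEXT_LEVEL=1.0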
Jan 23 19:26:30.318110 systemd[1]: Reloading... Jan 23 19:26:30.331516 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 19:26:30.332017 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 19:26:30.332727 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 19:26:30.333312 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 19:26:30.335159 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 19:26:30.335932 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Jan 23 19:26:30.336048 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Jan 23 19:26:30.347045 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 19:26:30.347088 systemd-tmpfiles[1324]: Skipping /boot Jan 23 19:26:30.360562 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Jan 23 19:26:30.372071 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 19:26:30.372091 systemd-tmpfiles[1324]: Skipping /boot Jan 23 19:26:30.414352 zram_generator::config[1348]: No configuration found. Jan 23 19:26:30.725534 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 19:26:30.725681 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 23 19:26:30.742405 kernel: ACPI: button: Power Button [PWRF] Jan 23 19:26:30.799304 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 19:26:30.808492 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 19:26:30.951021 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 19:26:30.957834 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 19:26:30.958452 systemd[1]: Reloading finished in 639 ms. Jan 23 19:26:30.975630 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 19:26:31.130881 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 19:26:31.185984 systemd[1]: Finished ensure-sysext.service. Jan 23 19:26:31.190183 kernel: kvm_amd: TSC scaling supported Jan 23 19:26:31.190327 kernel: kvm_amd: Nested Virtualization enabled Jan 23 19:26:31.190352 kernel: kvm_amd: Nested Paging enabled Jan 23 19:26:31.190379 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 23 19:26:31.201332 kernel: kvm_amd: PMU virtualization is disabled Jan 23 19:26:31.246872 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:26:31.249220 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 19:26:31.255187 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 19:26:31.259792 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 19:26:31.264512 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 19:26:31.392423 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 19:26:31.417495 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
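The "Duplicate line for path …, ignoring" warnings above are benign: when two tmpfiles.d fragments declare the same path, systemd-tmpfiles keeps the first line it parses and ignores the rest. An illustrative pair of fragments that would trigger the same warning (the file names and line contents are examples, not the actual Flatcar fragments):

    # /usr/lib/tmpfiles.d/a.conf — parsed first, takes effect
    d /var/lib/nfs/sm 0700 statd statd -

    # /usr/lib/tmpfiles.d/b.conf — same path again, ignored with a warning
    d /var/lib/nfs/sm 0700 statd statd -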
Jan 23 19:26:31.447192 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 19:26:31.456205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 19:26:31.463817 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 19:26:31.470755 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 19:26:31.474376 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 19:26:31.486045 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 19:26:31.502751 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 19:26:31.513425 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 19:26:31.519983 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 19:26:31.525332 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 19:26:31.525471 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:26:31.528331 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 19:26:31.528647 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 19:26:31.530296 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 19:26:31.530760 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 19:26:31.533584 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 19:26:31.533908 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 19:26:31.535570 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 19:26:31.535943 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 19:26:31.564912 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 19:26:31.592113 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 19:26:31.592207 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 19:26:31.595103 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 19:26:31.601415 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 19:26:31.609517 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 19:26:31.627706 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 19:26:31.631473 augenrules[1485]: No rules Jan 23 19:26:31.634349 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 19:26:31.634697 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 19:26:31.682939 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 19:26:31.690481 kernel: EDAC MC: Ver: 3.0.0 Jan 23 19:26:31.705921 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 23 19:26:31.707347 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 19:26:31.742858 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 19:26:31.873637 systemd-networkd[1457]: lo: Link UP Jan 23 19:26:31.873651 systemd-networkd[1457]: lo: Gained carrier Jan 23 19:26:31.876450 systemd-networkd[1457]: Enumeration completed Jan 23 19:26:31.876620 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 19:26:31.877802 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 19:26:31.877837 systemd-networkd[1457]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 19:26:31.879521 systemd-networkd[1457]: eth0: Link UP Jan 23 19:26:31.879828 systemd-networkd[1457]: eth0: Gained carrier Jan 23 19:26:31.879881 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 19:26:31.901351 systemd-networkd[1457]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 19:26:31.902414 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection. Jan 23 19:26:32.600474 systemd-timesyncd[1462]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 23 19:26:32.600585 systemd-timesyncd[1462]: Initial clock synchronization to Fri 2026-01-23 19:26:32.600347 UTC. Jan 23 19:26:32.605566 systemd-resolved[1460]: Positive Trust Anchors: Jan 23 19:26:32.605581 systemd-resolved[1460]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 19:26:32.605629 systemd-resolved[1460]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 19:26:32.613394 systemd-resolved[1460]: Defaulting to hostname 'linux'. Jan 23 19:26:32.657158 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 19:26:32.662887 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 19:26:32.669651 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 19:26:32.678260 systemd[1]: Reached target network.target - Network. Jan 23 19:26:32.682926 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 19:26:32.688180 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 19:26:32.694747 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 19:26:32.699461 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 19:26:32.705626 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
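eth0 above is picked up by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network, which DHCPs any interface that nothing more specific has claimed; the "potentially unpredictable interface name" note is networkd warning that the match keyed on the kernel-assigned name rather than a stable hardware property. The file's contents are not shown in the log, but a minimal catch-all of this shape would be:

    # zz-default.network (illustrative)
    [Match]
    Name=*

    [Network]
    DHCP=yes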
Jan 23 19:26:32.712563 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 19:26:32.716670 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 19:26:32.716924 systemd[1]: Reached target paths.target - Path Units. Jan 23 19:26:32.721385 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 19:26:32.726445 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 19:26:32.731659 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 19:26:32.741248 systemd[1]: Reached target timers.target - Timer Units. Jan 23 19:26:32.749848 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 19:26:32.758163 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 19:26:32.766366 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 19:26:32.775742 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 19:26:32.782633 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 19:26:32.803570 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 19:26:32.809674 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 19:26:32.818932 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 19:26:32.827929 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 19:26:32.834055 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 19:26:32.847818 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 19:26:32.852065 systemd[1]: Reached target basic.target - Basic System. Jan 23 19:26:32.859481 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 19:26:32.860107 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 19:26:32.869758 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 19:26:32.881446 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 19:26:32.888806 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 19:26:32.896776 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 19:26:32.906081 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 19:26:32.916445 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 19:26:32.923056 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 19:26:32.927573 jq[1515]: false Jan 23 19:26:32.932955 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 19:26:32.938959 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
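The "Listening on docker.socket / sshd.socket …" lines above are socket activation: systemd binds the listeners itself and starts the owning service only when the first client connects. A minimal sketch of the pattern, using hypothetical unit names rather than the shipped docker.socket:

    # example.socket — systemd owns the listener
    [Socket]
    ListenStream=/run/example.sock

    [Install]
    WantedBy=sockets.target

    # A matching example.service is started on the first connection and
    # receives the already-bound socket as a file descriptor.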
Jan 23 19:26:32.943217 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Refreshing passwd entry cache Jan 23 19:26:32.943221 oslogin_cache_refresh[1517]: Refreshing passwd entry cache Jan 23 19:26:32.955967 extend-filesystems[1516]: Found /dev/vda6 Jan 23 19:26:32.956640 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 19:26:32.972394 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Failure getting users, quitting Jan 23 19:26:32.972394 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 19:26:32.972394 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Refreshing group entry cache Jan 23 19:26:32.970564 oslogin_cache_refresh[1517]: Failure getting users, quitting Jan 23 19:26:32.970589 oslogin_cache_refresh[1517]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 19:26:32.970665 oslogin_cache_refresh[1517]: Refreshing group entry cache Jan 23 19:26:32.975157 extend-filesystems[1516]: Found /dev/vda9 Jan 23 19:26:32.975006 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 19:26:32.986474 extend-filesystems[1516]: Checking size of /dev/vda9 Jan 23 19:26:32.989045 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 19:26:32.992056 oslogin_cache_refresh[1517]: Failure getting groups, quitting Jan 23 19:26:32.995642 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Failure getting groups, quitting Jan 23 19:26:32.995642 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 19:26:32.992072 oslogin_cache_refresh[1517]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 19:26:33.001087 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 19:26:33.004518 extend-filesystems[1516]: Resized partition /dev/vda9 Jan 23 19:26:33.013240 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 19:26:33.034474 extend-filesystems[1537]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 19:26:33.059446 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 23 19:26:33.018040 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 19:26:33.027683 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 19:26:33.059869 jq[1540]: true Jan 23 19:26:33.040424 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 19:26:33.073150 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 19:26:33.083610 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 19:26:33.087430 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 19:26:33.090323 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 19:26:33.091341 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 19:26:33.101205 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 19:26:33.101685 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
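The EXT4 resize above is extend-filesystems growing the root filesystem to fill its partition on first boot. With the 4 KiB block size ext4 uses here, the logged block counts correspond to roughly 2.1 GiB before and 7.1 GiB after:

    # Block counts from the "resizing filesystem from 553472 to 1864699 blocks"
    # kernel message above, at 4 KiB per block.
    for blocks in (553472, 1864699):
        print(f"{blocks * 4096 / 2**30:.2f} GiB")  # 2.11 GiB, then 7.11 GiB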
Jan 23 19:26:33.327656 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 19:26:33.328154 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 19:26:33.474352 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 23 19:26:33.544227 extend-filesystems[1537]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 19:26:33.544227 extend-filesystems[1537]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 23 19:26:33.544227 extend-filesystems[1537]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 23 19:26:33.576168 extend-filesystems[1516]: Resized filesystem in /dev/vda9 Jan 23 19:26:33.597121 jq[1547]: true Jan 23 19:26:33.550865 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 19:26:33.580329 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 19:26:33.581958 (ntainerd)[1548]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 19:26:33.629432 update_engine[1538]: I20260123 19:26:33.629187 1538 main.cc:92] Flatcar Update Engine starting Jan 23 19:26:33.653363 systemd-logind[1534]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 19:26:33.653435 systemd-logind[1534]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 19:26:33.654204 systemd-logind[1534]: New seat seat0. Jan 23 19:26:33.728113 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 19:26:33.792069 tar[1546]: linux-amd64/LICENSE Jan 23 19:26:33.797181 tar[1546]: linux-amd64/helm Jan 23 19:26:33.849830 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 19:26:33.916880 dbus-daemon[1513]: [system] SELinux support is enabled Jan 23 19:26:33.917396 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 19:26:33.923982 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 19:26:33.924019 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 19:26:33.929460 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 19:26:33.929498 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 19:26:33.939641 bash[1578]: Updated "/home/core/.ssh/authorized_keys" Jan 23 19:26:33.942157 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 19:26:33.952184 update_engine[1538]: I20260123 19:26:33.951902 1538 update_check_scheduler.cc:74] Next update check in 8m23s Jan 23 19:26:33.954993 dbus-daemon[1513]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 19:26:33.955535 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 19:26:33.955835 systemd[1]: Started update-engine.service - Update Engine. Jan 23 19:26:33.974763 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
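update_engine above schedules its first check ("Next update check in 8m23s"), while locksmithd, whose startup line just below reports strategy="reboot", decides when an applied update may actually reboot the node. That strategy is conventionally driven by the /etc/flatcar/update.conf Ignition wrote during the files stage; its contents are not shown in the log, but an illustrative file would be:

    # /etc/flatcar/update.conf (illustrative; actual contents not logged)
    GROUP=stable
    REBOOT_STRATEGY=reboot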
Jan 23 19:26:34.070012 locksmithd[1581]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 19:26:34.210771 containerd[1548]: time="2026-01-23T19:26:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 19:26:34.212077 containerd[1548]: time="2026-01-23T19:26:34.211386239Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 19:26:34.240351 containerd[1548]: time="2026-01-23T19:26:34.240171680Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.089µs" Jan 23 19:26:34.240351 containerd[1548]: time="2026-01-23T19:26:34.240251680Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 19:26:34.240351 containerd[1548]: time="2026-01-23T19:26:34.240342139Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 19:26:34.243831 containerd[1548]: time="2026-01-23T19:26:34.241208365Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 19:26:34.243831 containerd[1548]: time="2026-01-23T19:26:34.241323430Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 19:26:34.243831 containerd[1548]: time="2026-01-23T19:26:34.241367012Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 19:26:34.243831 containerd[1548]: time="2026-01-23T19:26:34.241470124Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 19:26:34.243831 containerd[1548]: time="2026-01-23T19:26:34.241488809Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 19:26:34.243831 containerd[1548]: time="2026-01-23T19:26:34.241914504Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 19:26:34.243831 containerd[1548]: time="2026-01-23T19:26:34.241935312Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 19:26:34.243831 containerd[1548]: time="2026-01-23T19:26:34.241949489Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 19:26:34.243831 containerd[1548]: time="2026-01-23T19:26:34.241960339Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 19:26:34.243831 containerd[1548]: time="2026-01-23T19:26:34.242075374Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 19:26:34.244161 containerd[1548]: time="2026-01-23T19:26:34.244047474Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 19:26:34.244161 containerd[1548]: time="2026-01-23T19:26:34.244092770Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 19:26:34.244161 containerd[1548]: time="2026-01-23T19:26:34.244108449Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 19:26:34.244161 containerd[1548]: time="2026-01-23T19:26:34.244149505Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 19:26:34.244597 containerd[1548]: time="2026-01-23T19:26:34.244510399Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 19:26:34.244654 containerd[1548]: time="2026-01-23T19:26:34.244635642Z" level=info msg="metadata content store policy set" policy=shared Jan 23 19:26:34.253580 containerd[1548]: time="2026-01-23T19:26:34.253348650Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 19:26:34.253580 containerd[1548]: time="2026-01-23T19:26:34.253561728Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 19:26:34.253580 containerd[1548]: time="2026-01-23T19:26:34.253586674Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 19:26:34.253772 containerd[1548]: time="2026-01-23T19:26:34.253606912Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 19:26:34.253772 containerd[1548]: time="2026-01-23T19:26:34.253624365Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 19:26:34.253772 containerd[1548]: time="2026-01-23T19:26:34.253637639Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 19:26:34.253772 containerd[1548]: time="2026-01-23T19:26:34.253651866Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 19:26:34.253772 containerd[1548]: time="2026-01-23T19:26:34.253666804Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 19:26:34.253772 containerd[1548]: time="2026-01-23T19:26:34.253686099Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 19:26:34.253772 containerd[1548]: time="2026-01-23T19:26:34.253699555Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 19:26:34.253772 containerd[1548]: time="2026-01-23T19:26:34.253759006Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 19:26:34.253772 containerd[1548]: time="2026-01-23T19:26:34.253777691Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 19:26:34.254235 containerd[1548]: time="2026-01-23T19:26:34.254068804Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 19:26:34.254235 containerd[1548]: time="2026-01-23T19:26:34.254136570Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 19:26:34.254235 containerd[1548]: time="2026-01-23T19:26:34.254164663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 
19:26:34.254235 containerd[1548]: time="2026-01-23T19:26:34.254182626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 19:26:34.254235 containerd[1548]: time="2026-01-23T19:26:34.254204638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 19:26:34.254235 containerd[1548]: time="2026-01-23T19:26:34.254223743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 19:26:34.254235 containerd[1548]: time="2026-01-23T19:26:34.254235926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 19:26:34.254235 containerd[1548]: time="2026-01-23T19:26:34.254245795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 19:26:34.254918 containerd[1548]: time="2026-01-23T19:26:34.254256775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 19:26:34.254918 containerd[1548]: time="2026-01-23T19:26:34.254348456Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 19:26:34.254918 containerd[1548]: time="2026-01-23T19:26:34.254360348Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 19:26:34.254918 containerd[1548]: time="2026-01-23T19:26:34.254413798Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 19:26:34.254918 containerd[1548]: time="2026-01-23T19:26:34.254435929Z" level=info msg="Start snapshots syncer" Jan 23 19:26:34.254918 containerd[1548]: time="2026-01-23T19:26:34.254462800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 19:26:34.257846 containerd[1548]: time="2026-01-23T19:26:34.255935968Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 19:26:34.257846 containerd[1548]: time="2026-01-23T19:26:34.256030013Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 19:26:34.258103 containerd[1548]: time="2026-01-23T19:26:34.256093201Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 19:26:34.258103 containerd[1548]: time="2026-01-23T19:26:34.257226858Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 19:26:34.258103 containerd[1548]: time="2026-01-23T19:26:34.257330000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 19:26:34.258103 containerd[1548]: time="2026-01-23T19:26:34.257351109Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 19:26:34.258103 containerd[1548]: time="2026-01-23T19:26:34.257366628Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 19:26:34.258103 containerd[1548]: time="2026-01-23T19:26:34.257381646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 19:26:34.258103 containerd[1548]: time="2026-01-23T19:26:34.257395552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 19:26:34.258103 containerd[1548]: time="2026-01-23T19:26:34.257409719Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 19:26:34.258103 containerd[1548]: time="2026-01-23T19:26:34.257439214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 19:26:34.258103 containerd[1548]: 
time="2026-01-23T19:26:34.257466044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 19:26:34.258103 containerd[1548]: time="2026-01-23T19:26:34.257492644Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 19:26:34.258103 containerd[1548]: time="2026-01-23T19:26:34.257531736Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 19:26:34.258103 containerd[1548]: time="2026-01-23T19:26:34.257562754Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 19:26:34.258103 containerd[1548]: time="2026-01-23T19:26:34.257577071Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 19:26:34.258543 containerd[1548]: time="2026-01-23T19:26:34.257589455Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 19:26:34.258543 containerd[1548]: time="2026-01-23T19:26:34.257601527Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 19:26:34.258543 containerd[1548]: time="2026-01-23T19:26:34.257614110Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 19:26:34.258543 containerd[1548]: time="2026-01-23T19:26:34.257637264Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 19:26:34.258543 containerd[1548]: time="2026-01-23T19:26:34.257661138Z" level=info msg="runtime interface created" Jan 23 19:26:34.258543 containerd[1548]: time="2026-01-23T19:26:34.257669113Z" level=info msg="created NRI interface" Jan 23 19:26:34.258543 containerd[1548]: time="2026-01-23T19:26:34.257686846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 19:26:34.258543 containerd[1548]: time="2026-01-23T19:26:34.257701934Z" level=info msg="Connect containerd service" Jan 23 19:26:34.258543 containerd[1548]: time="2026-01-23T19:26:34.257783617Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 19:26:34.263393 containerd[1548]: time="2026-01-23T19:26:34.261556309Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 19:26:34.524845 systemd-networkd[1457]: eth0: Gained IPv6LL Jan 23 19:26:34.589525 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 19:26:34.634948 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 19:26:34.643211 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 23 19:26:34.649966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:26:34.655921 tar[1546]: linux-amd64/README.md Jan 23 19:26:34.663328 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 23 19:26:34.692109 containerd[1548]: time="2026-01-23T19:26:34.691402420Z" level=info msg="Start subscribing containerd event" Jan 23 19:26:34.692109 containerd[1548]: time="2026-01-23T19:26:34.691498239Z" level=info msg="Start recovering state" Jan 23 19:26:34.692109 containerd[1548]: time="2026-01-23T19:26:34.691642698Z" level=info msg="Start event monitor" Jan 23 19:26:34.692109 containerd[1548]: time="2026-01-23T19:26:34.691662374Z" level=info msg="Start cni network conf syncer for default" Jan 23 19:26:34.692109 containerd[1548]: time="2026-01-23T19:26:34.691672133Z" level=info msg="Start streaming server" Jan 23 19:26:34.692109 containerd[1548]: time="2026-01-23T19:26:34.691687832Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 19:26:34.692109 containerd[1548]: time="2026-01-23T19:26:34.691698111Z" level=info msg="runtime interface starting up..." Jan 23 19:26:34.692109 containerd[1548]: time="2026-01-23T19:26:34.691705826Z" level=info msg="starting plugins..." Jan 23 19:26:34.692109 containerd[1548]: time="2026-01-23T19:26:34.691768532Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 19:26:34.694495 containerd[1548]: time="2026-01-23T19:26:34.694471598Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 19:26:34.695426 containerd[1548]: time="2026-01-23T19:26:34.695208513Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 19:26:34.701129 containerd[1548]: time="2026-01-23T19:26:34.699537364Z" level=info msg="containerd successfully booted in 0.490349s" Jan 23 19:26:34.699592 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 19:26:34.713640 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 19:26:34.740420 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 23 19:26:34.740989 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 23 19:26:34.750015 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 19:26:34.760573 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 19:26:35.479682 sshd_keygen[1544]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 19:26:35.527260 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 19:26:35.537564 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 19:26:35.547023 systemd[1]: Started sshd@0-10.0.0.128:22-10.0.0.1:43286.service - OpenSSH per-connection server daemon (10.0.0.1:43286). Jan 23 19:26:35.577039 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 19:26:35.578808 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 19:26:35.591058 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 19:26:35.619413 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 19:26:35.630617 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 19:26:35.640621 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 19:26:35.647984 systemd[1]: Reached target getty.target - Login Prompts. 
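With containerd serving on both the ttrpc and gRPC sockets, the quickest liveness check is to query /run/containerd/containerd.sock directly; either of the stock clients works, assuming they are installed on the host:

    ctr --address /run/containerd/containerd.sock version
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock info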
Jan 23 19:26:35.716474 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 43286 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:26:35.721083 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:26:35.754189 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 19:26:35.762626 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 19:26:35.786451 systemd-logind[1534]: New session 1 of user core. Jan 23 19:26:35.800157 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 19:26:35.816592 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 19:26:35.844504 (systemd)[1651]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 19:26:35.853416 systemd-logind[1534]: New session c1 of user core. Jan 23 19:26:36.109405 systemd[1651]: Queued start job for default target default.target. Jan 23 19:26:36.124962 systemd[1651]: Created slice app.slice - User Application Slice. Jan 23 19:26:36.125859 systemd[1651]: Reached target paths.target - Paths. Jan 23 19:26:36.126814 systemd[1651]: Reached target timers.target - Timers. Jan 23 19:26:36.131220 systemd[1651]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 19:26:36.157964 systemd[1651]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 19:26:36.159135 systemd[1651]: Reached target sockets.target - Sockets. Jan 23 19:26:36.159447 systemd[1651]: Reached target basic.target - Basic System. Jan 23 19:26:36.159544 systemd[1651]: Reached target default.target - Main User Target. Jan 23 19:26:36.159592 systemd[1651]: Startup finished in 283ms. Jan 23 19:26:36.160114 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 19:26:36.179974 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 19:26:36.276236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:26:36.291659 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 19:26:36.315608 (kubelet)[1666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:26:36.318785 systemd[1]: Started sshd@1-10.0.0.128:22-10.0.0.1:40476.service - OpenSSH per-connection server daemon (10.0.0.1:40476). Jan 23 19:26:36.343401 systemd[1]: Startup finished in 7.906s (kernel) + 12.950s (initrd) + 10.898s (userspace) = 31.755s. Jan 23 19:26:36.478603 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 40476 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:26:36.481877 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:26:36.498174 systemd-logind[1534]: New session 2 of user core. Jan 23 19:26:36.507955 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 19:26:36.599351 sshd[1673]: Connection closed by 10.0.0.1 port 40476 Jan 23 19:26:36.585119 sshd-session[1668]: pam_unix(sshd:session): session closed for user core Jan 23 19:26:36.666771 systemd[1]: sshd@1-10.0.0.128:22-10.0.0.1:40476.service: Deactivated successfully. Jan 23 19:26:36.673120 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 19:26:36.680189 systemd-logind[1534]: Session 2 logged out. Waiting for processes to exit. Jan 23 19:26:36.686228 systemd-logind[1534]: Removed session 2. 
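The first SSH login for core (UID 500) triggers user-runtime-dir@500.service and then user@500.service, the per-user systemd instance whose start job appears above as systemd[1651]; session-1.scope holds the SSH session itself. The same state can be inspected interactively (session IDs differ per boot):

    loginctl list-sessions
    systemctl status user@500.service
    systemd-analyze    # reprints the kernel/initrd/userspace startup split logged above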
Jan 23 19:26:36.689233 systemd[1]: Started sshd@2-10.0.0.128:22-10.0.0.1:40482.service - OpenSSH per-connection server daemon (10.0.0.1:40482). Jan 23 19:26:36.801188 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 40482 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:26:36.800385 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:26:36.827693 systemd-logind[1534]: New session 3 of user core. Jan 23 19:26:36.848222 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 19:26:36.961434 sshd[1689]: Connection closed by 10.0.0.1 port 40482 Jan 23 19:26:36.963528 sshd-session[1686]: pam_unix(sshd:session): session closed for user core Jan 23 19:26:37.058945 systemd[1]: sshd@2-10.0.0.128:22-10.0.0.1:40482.service: Deactivated successfully. Jan 23 19:26:37.110033 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 19:26:37.140885 systemd-logind[1534]: Session 3 logged out. Waiting for processes to exit. Jan 23 19:26:37.164533 systemd[1]: Started sshd@3-10.0.0.128:22-10.0.0.1:40498.service - OpenSSH per-connection server daemon (10.0.0.1:40498). Jan 23 19:26:37.228702 systemd-logind[1534]: Removed session 3. Jan 23 19:26:37.357791 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 40498 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:26:37.361911 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:26:37.377620 systemd-logind[1534]: New session 4 of user core. Jan 23 19:26:37.391944 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 19:26:37.540686 sshd[1698]: Connection closed by 10.0.0.1 port 40498 Jan 23 19:26:37.543374 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Jan 23 19:26:37.560149 systemd[1]: sshd@3-10.0.0.128:22-10.0.0.1:40498.service: Deactivated successfully. Jan 23 19:26:37.562857 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 19:26:37.565241 systemd-logind[1534]: Session 4 logged out. Waiting for processes to exit. Jan 23 19:26:37.568531 systemd[1]: Started sshd@4-10.0.0.128:22-10.0.0.1:40514.service - OpenSSH per-connection server daemon (10.0.0.1:40514). Jan 23 19:26:37.572819 systemd-logind[1534]: Removed session 4. Jan 23 19:26:37.984434 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 40514 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:26:37.987789 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:26:38.022689 systemd-logind[1534]: New session 5 of user core. Jan 23 19:26:38.034007 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 19:26:38.141552 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 19:26:38.142050 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:26:38.181010 sudo[1710]: pam_unix(sudo:session): session closed for user root Jan 23 19:26:38.196227 sshd[1708]: Connection closed by 10.0.0.1 port 40514 Jan 23 19:26:38.197587 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Jan 23 19:26:38.230996 systemd[1]: sshd@4-10.0.0.128:22-10.0.0.1:40514.service: Deactivated successfully. Jan 23 19:26:38.233591 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 19:26:38.238997 systemd-logind[1534]: Session 5 logged out. Waiting for processes to exit. 
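The sudo entries follow sudo's standard audit format (PWD, USER, COMMAND); the setenforce 1 in session 5 switches SELinux to enforcing for the running system only. To check the mode and, if desired, persist it (the config path is the usual default, not shown in this log):

    getenforce         # Enforcing, Permissive, or Disabled
    sudo setenforce 1  # runtime only; reverts on reboot
    # to persist: set SELINUX=enforcing in /etc/selinux/config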
Jan 23 19:26:38.240222 systemd[1]: Started sshd@5-10.0.0.128:22-10.0.0.1:40526.service - OpenSSH per-connection server daemon (10.0.0.1:40526). Jan 23 19:26:38.245252 systemd-logind[1534]: Removed session 5. Jan 23 19:26:38.345842 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 40526 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:26:38.348415 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:26:38.358514 systemd-logind[1534]: New session 6 of user core. Jan 23 19:26:38.374949 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 19:26:38.460602 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 19:26:38.463683 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:26:38.513664 sudo[1721]: pam_unix(sudo:session): session closed for user root Jan 23 19:26:38.523254 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 19:26:38.524437 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:26:38.546703 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 19:26:38.565520 kubelet[1666]: E0123 19:26:38.565225 1666 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:26:38.570163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:26:38.570495 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:26:38.571083 systemd[1]: kubelet.service: Consumed 2.133s CPU time, 269.1M memory peak. Jan 23 19:26:38.631495 augenrules[1744]: No rules Jan 23 19:26:38.635194 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 19:26:38.635701 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 19:26:38.637393 sudo[1720]: pam_unix(sudo:session): session closed for user root Jan 23 19:26:38.643931 sshd[1719]: Connection closed by 10.0.0.1 port 40526 Jan 23 19:26:38.641697 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Jan 23 19:26:38.657790 systemd[1]: sshd@5-10.0.0.128:22-10.0.0.1:40526.service: Deactivated successfully. Jan 23 19:26:38.660106 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 19:26:38.662724 systemd-logind[1534]: Session 6 logged out. Waiting for processes to exit. Jan 23 19:26:38.668883 systemd[1]: Started sshd@6-10.0.0.128:22-10.0.0.1:40534.service - OpenSSH per-connection server daemon (10.0.0.1:40534). Jan 23 19:26:38.669904 systemd-logind[1534]: Removed session 6. Jan 23 19:26:38.748917 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 40534 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:26:38.750711 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:26:38.762405 systemd-logind[1534]: New session 7 of user core. Jan 23 19:26:38.780708 systemd[1]: Started session-7.scope - Session 7 of User core. 
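Session 6 removes the two audit rules files and restarts audit-rules.service, after which augenrules reports an empty ruleset ("No rules"). The equivalent manual sequence with the standard auditd tooling:

    sudo rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo augenrules --load   # recompiles /etc/audit/rules.d/*.rules into the kernel
    sudo auditctl -l         # prints "No rules" when the set is empty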
Jan 23 19:26:38.855437 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 19:26:38.857224 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:26:39.661964 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 19:26:39.689139 (dockerd)[1777]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 19:26:41.239845 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1025035819 wd_nsec: 1025035533 Jan 23 19:26:43.474169 dockerd[1777]: time="2026-01-23T19:26:43.473244294Z" level=info msg="Starting up" Jan 23 19:26:43.489419 dockerd[1777]: time="2026-01-23T19:26:43.488361280Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 19:26:43.706968 dockerd[1777]: time="2026-01-23T19:26:43.705952151Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 19:26:44.066720 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3302467082-merged.mount: Deactivated successfully. Jan 23 19:26:44.317658 dockerd[1777]: time="2026-01-23T19:26:44.316991383Z" level=info msg="Loading containers: start." Jan 23 19:26:44.390909 kernel: Initializing XFRM netlink socket Jan 23 19:26:46.025565 systemd-networkd[1457]: docker0: Link UP Jan 23 19:26:46.058061 dockerd[1777]: time="2026-01-23T19:26:46.056869982Z" level=info msg="Loading containers: done." Jan 23 19:26:46.305681 dockerd[1777]: time="2026-01-23T19:26:46.305067420Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 19:26:46.305681 dockerd[1777]: time="2026-01-23T19:26:46.305535063Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 19:26:46.305681 dockerd[1777]: time="2026-01-23T19:26:46.305674553Z" level=info msg="Initializing buildkit" Jan 23 19:26:46.519595 dockerd[1777]: time="2026-01-23T19:26:46.516734733Z" level=info msg="Completed buildkit initialization" Jan 23 19:26:46.581682 dockerd[1777]: time="2026-01-23T19:26:46.580664699Z" level=info msg="Daemon has completed initialization" Jan 23 19:26:46.612489 dockerd[1777]: time="2026-01-23T19:26:46.611321170Z" level=info msg="API listen on /run/docker.sock" Jan 23 19:26:46.614640 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 19:26:48.840503 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 19:26:48.874075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:26:50.231854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
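The docker startup above ends with the daemon on overlay2 and the API on /run/docker.sock; the redirect-dir warning is informational and only affects image-build performance. A quick sanity check of the finished daemon (standard docker CLI, assumed present):

    docker info --format '{{.Driver}} {{.ServerVersion}}'   # expect: overlay2 28.0.4
    docker run --rm hello-world                             # end-to-end smoke test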
Jan 23 19:26:50.269170 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:26:51.228761 kubelet[2004]: E0123 19:26:51.226823 2004 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:26:51.300128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:26:51.341548 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:26:51.362103 systemd[1]: kubelet.service: Consumed 1.165s CPU time, 114.5M memory peak. Jan 23 19:26:53.387731 containerd[1548]: time="2026-01-23T19:26:53.386592762Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 19:26:55.945076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1555757323.mount: Deactivated successfully. Jan 23 19:27:01.478694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 19:27:01.494369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:27:03.255412 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:27:03.322707 (kubelet)[2079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:27:04.654764 kubelet[2079]: E0123 19:27:04.653745 2079 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:27:04.662226 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:27:04.662570 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:27:04.663184 systemd[1]: kubelet.service: Consumed 1.848s CPU time, 111.4M memory peak. 
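The kubelet restart loop that recurs through the rest of this log has a single cause: the unit keeps starting before anything has written /var/lib/kubelet/config.yaml, so each attempt exits with status 1 and systemd schedules the next retry. In a kubeadm-style bootstrap that file only appears after init or join; a sketch, with the version matching the images pulled below and the pod CIDR an assumption:

    kubeadm init --kubernetes-version v1.33.7 --pod-network-cidr=10.244.0.0/16
    # worker nodes instead run: kubeadm join <control-plane>:6443 --token ... \
    #   --discovery-token-ca-cert-hash sha256:...
    systemctl status kubelet   # the loop stops once config.yaml exists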
Jan 23 19:27:09.136237 containerd[1548]: time="2026-01-23T19:27:09.135989931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:27:09.140377 containerd[1548]: time="2026-01-23T19:27:09.140178172Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Jan 23 19:27:09.142662 containerd[1548]: time="2026-01-23T19:27:09.142564472Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:27:09.153126 containerd[1548]: time="2026-01-23T19:27:09.151097175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:27:09.153126 containerd[1548]: time="2026-01-23T19:27:09.152220584Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 15.76556106s" Jan 23 19:27:09.153126 containerd[1548]: time="2026-01-23T19:27:09.152354923Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 23 19:27:09.159107 containerd[1548]: time="2026-01-23T19:27:09.156824158Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 19:27:14.743058 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 19:27:14.860983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:27:16.200650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:27:16.247807 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:27:16.584927 kubelet[2100]: E0123 19:27:16.584701 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:27:16.600996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:27:16.602024 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:27:16.611583 systemd[1]: kubelet.service: Consumed 1.230s CPU time, 113.2M memory peak. Jan 23 19:27:19.628132 update_engine[1538]: I20260123 19:27:19.627099 1538 update_attempter.cc:509] Updating boot flags... 
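The apiserver image alone took 15.8 s to arrive, and the full control-plane set trickles in over roughly the next minute and a half of log time. Pre-pulling decouples the transfer from bootstrap; either of these goes through the same containerd CRI socket:

    kubeadm config images pull --kubernetes-version v1.33.7
    crictl pull registry.k8s.io/kube-apiserver:v1.33.7   # or any single image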
Jan 23 19:27:23.051157 containerd[1548]: time="2026-01-23T19:27:23.049906515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:27:23.053758 containerd[1548]: time="2026-01-23T19:27:23.053347526Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Jan 23 19:27:23.055987 containerd[1548]: time="2026-01-23T19:27:23.055841189Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:27:23.063454 containerd[1548]: time="2026-01-23T19:27:23.063232100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:27:23.065499 containerd[1548]: time="2026-01-23T19:27:23.064110513Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 13.907222768s" Jan 23 19:27:23.065499 containerd[1548]: time="2026-01-23T19:27:23.064182747Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 23 19:27:23.069486 containerd[1548]: time="2026-01-23T19:27:23.067992846Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 19:27:26.727535 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 19:27:26.735050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:27:27.571681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:27:27.605458 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:27:29.174625 kubelet[2137]: E0123 19:27:29.173015 2137 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:27:29.184062 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:27:29.184479 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:27:29.185389 systemd[1]: kubelet.service: Consumed 1.265s CPU time, 109.1M memory peak. 
Jan 23 19:27:32.574681 containerd[1548]: time="2026-01-23T19:27:32.574434580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:27:32.582585 containerd[1548]: time="2026-01-23T19:27:32.581803465Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Jan 23 19:27:32.586372 containerd[1548]: time="2026-01-23T19:27:32.586211661Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:27:32.646190 containerd[1548]: time="2026-01-23T19:27:32.642466427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:27:33.002398 containerd[1548]: time="2026-01-23T19:27:33.001963985Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 9.932368853s" Jan 23 19:27:33.002398 containerd[1548]: time="2026-01-23T19:27:33.002146075Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 23 19:27:33.036453 containerd[1548]: time="2026-01-23T19:27:33.035186927Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 19:27:39.240158 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 19:27:39.272466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:27:40.234419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount575688843.mount: Deactivated successfully. Jan 23 19:27:41.238531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:27:41.264051 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:27:41.479781 kubelet[2162]: E0123 19:27:41.478832 2162 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:27:41.484223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:27:41.485209 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:27:41.487439 systemd[1]: kubelet.service: Consumed 1.328s CPU time, 111.2M memory peak. 
Jan 23 19:27:49.206443 containerd[1548]: time="2026-01-23T19:27:49.205704832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:27:49.213880 containerd[1548]: time="2026-01-23T19:27:49.213586049Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 23 19:27:49.218203 containerd[1548]: time="2026-01-23T19:27:49.216863392Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:27:49.223813 containerd[1548]: time="2026-01-23T19:27:49.223563034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:27:49.225330 containerd[1548]: time="2026-01-23T19:27:49.225218184Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 16.189694129s" Jan 23 19:27:49.225412 containerd[1548]: time="2026-01-23T19:27:49.225360469Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 23 19:27:49.233892 containerd[1548]: time="2026-01-23T19:27:49.232384676Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 19:27:50.444469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1117109775.mount: Deactivated successfully. Jan 23 19:27:52.321396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 23 19:27:52.329441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:27:52.863187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:27:52.892149 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:27:53.770365 kubelet[2200]: E0123 19:27:53.770221 2200 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:27:53.782178 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:27:53.784502 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:27:53.785993 systemd[1]: kubelet.service: Consumed 730ms CPU time, 110.6M memory peak. 
Jan 23 19:28:01.471554 containerd[1548]: time="2026-01-23T19:28:01.469633608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:28:01.477172 containerd[1548]: time="2026-01-23T19:28:01.477021583Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jan 23 19:28:01.482024 containerd[1548]: time="2026-01-23T19:28:01.481818101Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:28:01.501239 containerd[1548]: time="2026-01-23T19:28:01.498137504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:28:01.501476 containerd[1548]: time="2026-01-23T19:28:01.501427230Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 12.268988263s" Jan 23 19:28:01.501476 containerd[1548]: time="2026-01-23T19:28:01.501471003Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 23 19:28:01.511038 containerd[1548]: time="2026-01-23T19:28:01.506450764Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 19:28:03.148678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1663774588.mount: Deactivated successfully. 
Jan 23 19:28:03.176010 containerd[1548]: time="2026-01-23T19:28:03.175726937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:28:03.177996 containerd[1548]: time="2026-01-23T19:28:03.177458644Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 23 19:28:03.186538 containerd[1548]: time="2026-01-23T19:28:03.185802522Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:28:03.243448 containerd[1548]: time="2026-01-23T19:28:03.242793932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:28:03.257586 containerd[1548]: time="2026-01-23T19:28:03.257205361Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.750701028s" Jan 23 19:28:03.260625 containerd[1548]: time="2026-01-23T19:28:03.258340230Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 19:28:03.264625 containerd[1548]: time="2026-01-23T19:28:03.262768202Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 19:28:03.973891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 23 19:28:04.006669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:28:04.017064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2790288440.mount: Deactivated successfully. Jan 23 19:28:05.560551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:28:05.590945 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:28:05.860250 kubelet[2264]: E0123 19:28:05.857066 2264 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:28:05.870695 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:28:05.870966 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:28:05.871656 systemd[1]: kubelet.service: Consumed 1.153s CPU time, 108.6M memory peak. Jan 23 19:28:16.041786 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 23 19:28:16.080630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:28:17.194508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
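Unlike the other images, the pause:3.10 pull above carries the io.cri-containerd.pinned label, which exempts the sandbox image from image garbage collection. The flag is visible in the image status output (recent crictl versions report a pinned field):

    crictl inspecti registry.k8s.io/pause:3.10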
Jan 23 19:28:17.231966 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:28:18.272872 kubelet[2320]: E0123 19:28:18.270999 2320 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:28:18.282160 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:28:18.282543 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:28:18.284848 systemd[1]: kubelet.service: Consumed 1.433s CPU time, 108.4M memory peak. Jan 23 19:28:20.178965 containerd[1548]: time="2026-01-23T19:28:20.177653881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:28:20.184537 containerd[1548]: time="2026-01-23T19:28:20.183537307Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 23 19:28:20.186931 containerd[1548]: time="2026-01-23T19:28:20.186799093Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:28:20.193801 containerd[1548]: time="2026-01-23T19:28:20.193195597Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 16.930291581s" Jan 23 19:28:20.193801 containerd[1548]: time="2026-01-23T19:28:20.193354432Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 23 19:28:20.193801 containerd[1548]: time="2026-01-23T19:28:20.193635678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:28:28.474657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 23 19:28:28.493462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:28:29.301668 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 19:28:29.302485 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 19:28:29.308632 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:28:29.322710 systemd[1]: kubelet.service: Consumed 255ms CPU time, 74.3M memory peak. Jan 23 19:28:29.333538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:28:29.538258 systemd[1]: Reload requested from client PID 2367 ('systemctl') (unit session-7.scope)... Jan 23 19:28:29.538437 systemd[1]: Reloading... Jan 23 19:28:30.031401 zram_generator::config[2413]: No configuration found. Jan 23 19:28:30.742496 systemd[1]: Reloading finished in 1202 ms. 
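The "Reload requested from client PID 2367" entry is a systemctl daemon-reload issued from the interactive SSH session (session-7.scope), the normal follow-up to editing unit files; zram-generator simply reports that it has nothing to configure. For the flapping kubelet unit the usual cycle would be:

    sudo systemctl edit kubelet    # drop-in under /etc/systemd/system/kubelet.service.d/
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet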
Jan 23 19:28:30.925139 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 19:28:30.925572 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 19:28:30.929584 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:28:30.929689 systemd[1]: kubelet.service: Consumed 306ms CPU time, 98.4M memory peak. Jan 23 19:28:30.954427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:28:31.676113 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:28:31.708621 (kubelet)[2459]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 19:28:32.638479 kubelet[2459]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 19:28:32.638479 kubelet[2459]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 19:28:32.638479 kubelet[2459]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 19:28:32.638479 kubelet[2459]: I0123 19:28:32.638248 2459 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 19:28:33.299710 kubelet[2459]: I0123 19:28:33.298619 2459 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 19:28:33.299710 kubelet[2459]: I0123 19:28:33.298680 2459 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 19:28:33.301247 kubelet[2459]: I0123 19:28:33.301150 2459 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 19:28:33.840521 kubelet[2459]: E0123 19:28:33.840415 2459 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 19:28:33.854788 kubelet[2459]: I0123 19:28:33.853097 2459 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 19:28:33.887337 kubelet[2459]: I0123 19:28:33.886376 2459 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 19:28:33.955092 kubelet[2459]: I0123 19:28:33.953217 2459 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 19:28:33.958758 kubelet[2459]: I0123 19:28:33.958449 2459 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 19:28:33.959822 kubelet[2459]: I0123 19:28:33.958519 2459 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 19:28:33.959822 kubelet[2459]: I0123 19:28:33.959212 2459 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 19:28:33.959822 kubelet[2459]: I0123 19:28:33.959233 2459 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 19:28:33.959822 kubelet[2459]: I0123 19:28:33.959563 2459 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:28:33.985084 kubelet[2459]: I0123 19:28:33.982028 2459 kubelet.go:480] "Attempting to sync node with API server" Jan 23 19:28:33.985084 kubelet[2459]: I0123 19:28:33.982113 2459 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 19:28:33.985084 kubelet[2459]: I0123 19:28:33.982157 2459 kubelet.go:386] "Adding apiserver pod source" Jan 23 19:28:33.985084 kubelet[2459]: I0123 19:28:33.982231 2459 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 19:28:33.993474 kubelet[2459]: E0123 19:28:33.993364 2459 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 19:28:33.998426 kubelet[2459]: E0123 19:28:33.998256 2459 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 
19:28:34.359235 kubelet[2459]: I0123 19:28:34.358103 2459 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 19:28:34.361049 kubelet[2459]: I0123 19:28:34.361026 2459 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 19:28:34.363214 kubelet[2459]: W0123 19:28:34.363189 2459 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 19:28:34.370975 kubelet[2459]: I0123 19:28:34.370904 2459 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 19:28:34.377168 kubelet[2459]: I0123 19:28:34.373501 2459 server.go:1289] "Started kubelet" Jan 23 19:28:34.377168 kubelet[2459]: I0123 19:28:34.375837 2459 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 19:28:34.377168 kubelet[2459]: I0123 19:28:34.376475 2459 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 19:28:34.377168 kubelet[2459]: I0123 19:28:34.376517 2459 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 19:28:34.378699 kubelet[2459]: I0123 19:28:34.376472 2459 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 19:28:34.388324 kubelet[2459]: I0123 19:28:34.385130 2459 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 19:28:34.388324 kubelet[2459]: E0123 19:28:34.385599 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:34.388324 kubelet[2459]: I0123 19:28:34.386417 2459 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 19:28:34.388324 kubelet[2459]: I0123 19:28:34.386488 2459 server.go:317] "Adding debug handlers to kubelet server" Jan 23 19:28:34.388324 kubelet[2459]: I0123 19:28:34.386546 2459 reconciler.go:26] "Reconciler: start to sync state" Jan 23 19:28:34.389333 kubelet[2459]: E0123 19:28:34.389195 2459 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 19:28:34.393130 kubelet[2459]: I0123 19:28:34.389841 2459 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 19:28:34.393130 kubelet[2459]: E0123 19:28:34.389422 2459 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.128:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.128:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d72db30f09781 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 19:28:34.370951041 +0000 UTC m=+2.632208410,LastTimestamp:2026-01-23 19:28:34.370951041 +0000 UTC m=+2.632208410,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 19:28:34.393130 kubelet[2459]: E0123 19:28:34.390884 2459 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="200ms" Jan 23 19:28:34.393130 kubelet[2459]: I0123 19:28:34.391387 2459 factory.go:223] Registration of the systemd container factory successfully Jan 23 19:28:34.393130 kubelet[2459]: I0123 19:28:34.391494 2459 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 19:28:34.397860 kubelet[2459]: E0123 19:28:34.396457 2459 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 19:28:34.397860 kubelet[2459]: I0123 19:28:34.397327 2459 factory.go:223] Registration of the containerd container factory successfully Jan 23 19:28:34.830780 kubelet[2459]: E0123 19:28:34.830691 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:34.831242 kubelet[2459]: I0123 19:28:34.831053 2459 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 19:28:34.834197 kubelet[2459]: E0123 19:28:34.834056 2459 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="400ms" Jan 23 19:28:34.844468 kubelet[2459]: I0123 19:28:34.843872 2459 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 19:28:34.846823 kubelet[2459]: I0123 19:28:34.846580 2459 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 19:28:34.847313 kubelet[2459]: I0123 19:28:34.845823 2459 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 19:28:34.847401 kubelet[2459]: I0123 19:28:34.847384 2459 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:28:34.852644 kubelet[2459]: I0123 19:28:34.847499 2459 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 19:28:34.852644 kubelet[2459]: I0123 19:28:34.848607 2459 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
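[editor's note] The "Failed to ensure lease exists, will retry" entries show a doubling retry interval (200ms and 400ms here, then 800ms, 1.6s and 3.2s further down) while the API server at 10.0.0.128:6443 is still refusing connections. A small stdlib sketch of that backoff shape; the cap value is an assumption for illustration and does not come from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Doubling backoff matching the intervals visible in the log:
	// 200ms, 400ms, 800ms, 1.6s, 3.2s, ...
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed cap, illustrative only
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: next retry in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}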
Jan 23 19:28:34.852644 kubelet[2459]: I0123 19:28:34.848623 2459 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 19:28:34.852644 kubelet[2459]: E0123 19:28:34.848831 2459 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 19:28:34.853697 kubelet[2459]: E0123 19:28:34.853665 2459 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 19:28:34.932642 kubelet[2459]: E0123 19:28:34.932524 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:34.975353 kubelet[2459]: I0123 19:28:34.953803 2459 policy_none.go:49] "None policy: Start" Jan 23 19:28:34.975353 kubelet[2459]: E0123 19:28:34.966083 2459 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 19:28:34.975353 kubelet[2459]: I0123 19:28:34.972773 2459 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 19:28:34.975353 kubelet[2459]: I0123 19:28:34.972836 2459 state_mem.go:35] "Initializing new in-memory state store" Jan 23 19:28:35.022113 kubelet[2459]: E0123 19:28:35.021899 2459 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 19:28:35.035243 kubelet[2459]: E0123 19:28:35.033589 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:35.035059 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 19:28:35.143137 kubelet[2459]: E0123 19:28:35.141031 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:35.175827 kubelet[2459]: E0123 19:28:35.172258 2459 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 19:28:35.175510 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 19:28:35.205977 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
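[editor's note] The three slices systemd creates here (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice) are the QoS cgroup hierarchy that the container manager drives through the systemd cgroup driver reported earlier ("CgroupDriver":"systemd", "CgroupVersion":2). Per-pod slices such as kubepods-burstable-pod90d2eb79e7fa5757fc0f3149aa26c471.slice appear under them later in the log. A sketch of the naming pattern as inferred from these entries, not the kubelet's exact implementation:

package main

import "fmt"

// podSlice reproduces the slice names visible in the log, e.g.
// kubepods-burstable-pod90d2eb79e7fa5757fc0f3149aa26c471.slice.
// Treating guaranteed pods as parented directly under kubepods.slice is an
// assumption inferred from the hierarchy, not stated in the log.
func podSlice(qos, uid string) string {
	if qos == "" { // guaranteed
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, uid)
}

func main() {
	fmt.Println(podSlice("burstable", "90d2eb79e7fa5757fc0f3149aa26c471"))
	fmt.Println(podSlice("besteffort", "aaaabbbbccccdddd0000111122223333")) // hypothetical UID
}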
Jan 23 19:28:35.241720 kubelet[2459]: E0123 19:28:35.241555 2459 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="800ms" Jan 23 19:28:35.243145 kubelet[2459]: E0123 19:28:35.242972 2459 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 19:28:35.244441 kubelet[2459]: E0123 19:28:35.244200 2459 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 19:28:35.251452 kubelet[2459]: E0123 19:28:35.251231 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:35.258313 kubelet[2459]: E0123 19:28:35.258133 2459 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 19:28:35.261776 kubelet[2459]: I0123 19:28:35.261442 2459 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 19:28:35.261776 kubelet[2459]: I0123 19:28:35.261462 2459 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 19:28:35.263165 kubelet[2459]: I0123 19:28:35.261943 2459 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 19:28:35.271469 kubelet[2459]: E0123 19:28:35.271395 2459 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 19:28:35.271862 kubelet[2459]: E0123 19:28:35.271635 2459 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 19:28:35.369526 kubelet[2459]: I0123 19:28:35.366192 2459 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:28:35.370117 kubelet[2459]: E0123 19:28:35.369894 2459 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Jan 23 19:28:35.775229 kubelet[2459]: I0123 19:28:35.774844 2459 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:28:35.777073 kubelet[2459]: E0123 19:28:35.776944 2459 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Jan 23 19:28:35.796775 kubelet[2459]: I0123 19:28:35.796740 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:28:35.797086 kubelet[2459]: I0123 19:28:35.797061 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:28:35.797214 kubelet[2459]: I0123 19:28:35.797195 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/90d2eb79e7fa5757fc0f3149aa26c471-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"90d2eb79e7fa5757fc0f3149aa26c471\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:28:35.797388 kubelet[2459]: I0123 19:28:35.797369 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/90d2eb79e7fa5757fc0f3149aa26c471-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"90d2eb79e7fa5757fc0f3149aa26c471\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:28:35.797472 kubelet[2459]: I0123 19:28:35.797455 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90d2eb79e7fa5757fc0f3149aa26c471-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"90d2eb79e7fa5757fc0f3149aa26c471\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:28:35.797565 kubelet[2459]: I0123 19:28:35.797548 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:28:35.797644 kubelet[2459]: I0123 19:28:35.797629 2459 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:28:35.797719 kubelet[2459]: I0123 19:28:35.797705 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:28:35.800742 systemd[1]: Created slice kubepods-burstable-pod90d2eb79e7fa5757fc0f3149aa26c471.slice - libcontainer container kubepods-burstable-pod90d2eb79e7fa5757fc0f3149aa26c471.slice. Jan 23 19:28:35.873665 kubelet[2459]: E0123 19:28:35.873156 2459 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:28:35.880454 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 23 19:28:35.901146 kubelet[2459]: I0123 19:28:35.898888 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 23 19:28:35.902877 kubelet[2459]: E0123 19:28:35.902245 2459 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:28:35.902979 kubelet[2459]: E0123 19:28:35.902928 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:35.931661 containerd[1548]: time="2026-01-23T19:28:35.931169784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 23 19:28:35.935834 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Jan 23 19:28:35.946088 kubelet[2459]: E0123 19:28:35.945528 2459 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:28:36.064065 kubelet[2459]: E0123 19:28:36.054886 2459 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 19:28:36.074908 kubelet[2459]: E0123 19:28:36.054941 2459 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="1.6s" Jan 23 19:28:36.177497 kubelet[2459]: E0123 19:28:36.177426 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:36.182124 containerd[1548]: time="2026-01-23T19:28:36.179668441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:90d2eb79e7fa5757fc0f3149aa26c471,Namespace:kube-system,Attempt:0,}" Jan 23 19:28:36.182253 kubelet[2459]: I0123 19:28:36.180198 2459 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:28:36.182253 kubelet[2459]: E0123 19:28:36.181714 2459 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Jan 23 19:28:36.217757 containerd[1548]: time="2026-01-23T19:28:36.217613175Z" level=info msg="connecting to shim e3dc99c3375444c96d6e70336e71368daa46984bf4b62963a7934080310bef57" address="unix:///run/containerd/s/2d4cef365fd69e2044ef5b8a8914a6e6a7013d31a50614f8b8a8412f4ac043d8" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:28:36.259101 kubelet[2459]: E0123 19:28:36.254529 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:36.301422 containerd[1548]: time="2026-01-23T19:28:36.300772128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 23 19:28:36.332612 kubelet[2459]: E0123 19:28:36.331355 2459 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 19:28:36.436666 containerd[1548]: time="2026-01-23T19:28:36.436254433Z" level=info msg="connecting to shim 05c1c3c5ecd633a0de05562c350ec6cdac6edf0288cc9e027b03929433ca8b78" address="unix:///run/containerd/s/20496462a3e165c88ee29fd7d2a159a88239c2b82cec1ab983cb0f35c84dd4b6" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:28:36.526871 containerd[1548]: time="2026-01-23T19:28:36.526652668Z" level=info msg="connecting to shim 6272c746af367a0d286031f4167070f5d920eb382bf226d9ce8c7bedb2428dd3" 
address="unix:///run/containerd/s/74bcf080eb6875950160a1ddf2b69f852379d42a2386c6fd28cc3a0383f57672" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:28:36.640142 systemd[1]: Started cri-containerd-05c1c3c5ecd633a0de05562c350ec6cdac6edf0288cc9e027b03929433ca8b78.scope - libcontainer container 05c1c3c5ecd633a0de05562c350ec6cdac6edf0288cc9e027b03929433ca8b78. Jan 23 19:28:36.649591 systemd[1]: Started cri-containerd-e3dc99c3375444c96d6e70336e71368daa46984bf4b62963a7934080310bef57.scope - libcontainer container e3dc99c3375444c96d6e70336e71368daa46984bf4b62963a7934080310bef57. Jan 23 19:28:36.718026 systemd[1]: Started cri-containerd-6272c746af367a0d286031f4167070f5d920eb382bf226d9ce8c7bedb2428dd3.scope - libcontainer container 6272c746af367a0d286031f4167070f5d920eb382bf226d9ce8c7bedb2428dd3. Jan 23 19:28:36.893675 containerd[1548]: time="2026-01-23T19:28:36.892771223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:90d2eb79e7fa5757fc0f3149aa26c471,Namespace:kube-system,Attempt:0,} returns sandbox id \"05c1c3c5ecd633a0de05562c350ec6cdac6edf0288cc9e027b03929433ca8b78\"" Jan 23 19:28:36.895838 kubelet[2459]: E0123 19:28:36.895329 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:36.900601 containerd[1548]: time="2026-01-23T19:28:36.899959333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3dc99c3375444c96d6e70336e71368daa46984bf4b62963a7934080310bef57\"" Jan 23 19:28:36.904500 kubelet[2459]: E0123 19:28:36.902798 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:36.924095 containerd[1548]: time="2026-01-23T19:28:36.924049237Z" level=info msg="CreateContainer within sandbox \"05c1c3c5ecd633a0de05562c350ec6cdac6edf0288cc9e027b03929433ca8b78\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 19:28:36.937335 containerd[1548]: time="2026-01-23T19:28:36.935802485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6272c746af367a0d286031f4167070f5d920eb382bf226d9ce8c7bedb2428dd3\"" Jan 23 19:28:36.937827 kubelet[2459]: E0123 19:28:36.936934 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:36.937884 containerd[1548]: time="2026-01-23T19:28:36.937582689Z" level=info msg="CreateContainer within sandbox \"e3dc99c3375444c96d6e70336e71368daa46984bf4b62963a7934080310bef57\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 19:28:36.952145 kubelet[2459]: E0123 19:28:36.951811 2459 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 19:28:36.957870 containerd[1548]: time="2026-01-23T19:28:36.957089511Z" level=info msg="CreateContainer within sandbox 
\"6272c746af367a0d286031f4167070f5d920eb382bf226d9ce8c7bedb2428dd3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 19:28:36.988349 kubelet[2459]: I0123 19:28:36.985803 2459 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:28:36.988349 kubelet[2459]: E0123 19:28:36.986228 2459 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Jan 23 19:28:37.029312 containerd[1548]: time="2026-01-23T19:28:37.026944235Z" level=info msg="Container 1a01b9279ab14e243b11c69b33a0747e98d0f48a632e27d90840c2dad1b2de06: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:28:37.029312 containerd[1548]: time="2026-01-23T19:28:37.027012269Z" level=info msg="Container 637117f5780811e2727eef24bc4e1132e6cee8c776c38d7c1318d142a75008be: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:28:37.056590 containerd[1548]: time="2026-01-23T19:28:37.056148895Z" level=info msg="Container fbc71c2614684cdaf8d0e4b8b4175a59f2475769ae8881aeab8e178486ea1b65: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:28:37.068103 containerd[1548]: time="2026-01-23T19:28:37.067964606Z" level=info msg="CreateContainer within sandbox \"05c1c3c5ecd633a0de05562c350ec6cdac6edf0288cc9e027b03929433ca8b78\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1a01b9279ab14e243b11c69b33a0747e98d0f48a632e27d90840c2dad1b2de06\"" Jan 23 19:28:37.069423 containerd[1548]: time="2026-01-23T19:28:37.069391304Z" level=info msg="StartContainer for \"1a01b9279ab14e243b11c69b33a0747e98d0f48a632e27d90840c2dad1b2de06\"" Jan 23 19:28:37.081868 containerd[1548]: time="2026-01-23T19:28:37.081813128Z" level=info msg="connecting to shim 1a01b9279ab14e243b11c69b33a0747e98d0f48a632e27d90840c2dad1b2de06" address="unix:///run/containerd/s/20496462a3e165c88ee29fd7d2a159a88239c2b82cec1ab983cb0f35c84dd4b6" protocol=ttrpc version=3 Jan 23 19:28:37.113116 containerd[1548]: time="2026-01-23T19:28:37.113039024Z" level=info msg="CreateContainer within sandbox \"6272c746af367a0d286031f4167070f5d920eb382bf226d9ce8c7bedb2428dd3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fbc71c2614684cdaf8d0e4b8b4175a59f2475769ae8881aeab8e178486ea1b65\"" Jan 23 19:28:37.114138 containerd[1548]: time="2026-01-23T19:28:37.113793043Z" level=info msg="CreateContainer within sandbox \"e3dc99c3375444c96d6e70336e71368daa46984bf4b62963a7934080310bef57\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"637117f5780811e2727eef24bc4e1132e6cee8c776c38d7c1318d142a75008be\"" Jan 23 19:28:37.119681 containerd[1548]: time="2026-01-23T19:28:37.115727047Z" level=info msg="StartContainer for \"fbc71c2614684cdaf8d0e4b8b4175a59f2475769ae8881aeab8e178486ea1b65\"" Jan 23 19:28:37.120088 containerd[1548]: time="2026-01-23T19:28:37.120058933Z" level=info msg="StartContainer for \"637117f5780811e2727eef24bc4e1132e6cee8c776c38d7c1318d142a75008be\"" Jan 23 19:28:37.121915 containerd[1548]: time="2026-01-23T19:28:37.121835861Z" level=info msg="connecting to shim 637117f5780811e2727eef24bc4e1132e6cee8c776c38d7c1318d142a75008be" address="unix:///run/containerd/s/2d4cef365fd69e2044ef5b8a8914a6e6a7013d31a50614f8b8a8412f4ac043d8" protocol=ttrpc version=3 Jan 23 19:28:37.130399 containerd[1548]: time="2026-01-23T19:28:37.125436707Z" level=info msg="connecting to shim fbc71c2614684cdaf8d0e4b8b4175a59f2475769ae8881aeab8e178486ea1b65" 
address="unix:///run/containerd/s/74bcf080eb6875950160a1ddf2b69f852379d42a2386c6fd28cc3a0383f57672" protocol=ttrpc version=3 Jan 23 19:28:37.212017 systemd[1]: Started cri-containerd-1a01b9279ab14e243b11c69b33a0747e98d0f48a632e27d90840c2dad1b2de06.scope - libcontainer container 1a01b9279ab14e243b11c69b33a0747e98d0f48a632e27d90840c2dad1b2de06. Jan 23 19:28:37.250620 systemd[1]: Started cri-containerd-fbc71c2614684cdaf8d0e4b8b4175a59f2475769ae8881aeab8e178486ea1b65.scope - libcontainer container fbc71c2614684cdaf8d0e4b8b4175a59f2475769ae8881aeab8e178486ea1b65. Jan 23 19:28:37.306841 systemd[1]: Started cri-containerd-637117f5780811e2727eef24bc4e1132e6cee8c776c38d7c1318d142a75008be.scope - libcontainer container 637117f5780811e2727eef24bc4e1132e6cee8c776c38d7c1318d142a75008be. Jan 23 19:28:37.334181 kubelet[2459]: E0123 19:28:37.333946 2459 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 19:28:37.548762 containerd[1548]: time="2026-01-23T19:28:37.548219332Z" level=info msg="StartContainer for \"fbc71c2614684cdaf8d0e4b8b4175a59f2475769ae8881aeab8e178486ea1b65\" returns successfully" Jan 23 19:28:37.571086 containerd[1548]: time="2026-01-23T19:28:37.570652217Z" level=info msg="StartContainer for \"1a01b9279ab14e243b11c69b33a0747e98d0f48a632e27d90840c2dad1b2de06\" returns successfully" Jan 23 19:28:37.763215 kubelet[2459]: E0123 19:28:37.762464 2459 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="3.2s" Jan 23 19:28:37.951119 containerd[1548]: time="2026-01-23T19:28:37.950680632Z" level=info msg="StartContainer for \"637117f5780811e2727eef24bc4e1132e6cee8c776c38d7c1318d142a75008be\" returns successfully" Jan 23 19:28:37.980591 kubelet[2459]: E0123 19:28:37.980175 2459 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:28:37.983533 kubelet[2459]: E0123 19:28:37.980654 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:37.983533 kubelet[2459]: E0123 19:28:37.981968 2459 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:28:37.983533 kubelet[2459]: E0123 19:28:37.982137 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:38.647678 kubelet[2459]: I0123 19:28:38.646468 2459 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:28:39.361760 kubelet[2459]: E0123 19:28:39.361623 2459 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:28:39.370086 kubelet[2459]: E0123 19:28:39.366941 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:39.372746 kubelet[2459]: E0123 19:28:39.372400 2459 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:28:39.372746 kubelet[2459]: E0123 19:28:39.372564 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:39.373332 kubelet[2459]: E0123 19:28:39.373256 2459 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:28:39.373704 kubelet[2459]: E0123 19:28:39.373623 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:40.793185 kubelet[2459]: E0123 19:28:40.792590 2459 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:28:40.793185 kubelet[2459]: E0123 19:28:40.793008 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:44.998055 kubelet[2459]: E0123 19:28:44.997765 2459 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:28:45.017753 kubelet[2459]: E0123 19:28:44.998502 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:45.347070 kubelet[2459]: E0123 19:28:45.345961 2459 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 19:28:46.091435 kubelet[2459]: E0123 19:28:46.090701 2459 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:28:46.102545 kubelet[2459]: E0123 19:28:46.092597 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:48.737491 kubelet[2459]: E0123 19:28:48.736860 2459 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 23 19:28:48.743112 kubelet[2459]: E0123 19:28:48.740119 2459 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 19:28:49.357952 kubelet[2459]: E0123 19:28:49.357446 2459 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 19:28:49.729477 kubelet[2459]: E0123 
19:28:49.727385 2459 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 23 19:28:49.889641 kubelet[2459]: E0123 19:28:49.889372 2459 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:28:49.889641 kubelet[2459]: E0123 19:28:49.889707 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:49.917708 kubelet[2459]: E0123 19:28:49.917115 2459 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188d72db30f09781 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 19:28:34.370951041 +0000 UTC m=+2.632208410,LastTimestamp:2026-01-23 19:28:34.370951041 +0000 UTC m=+2.632208410,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 19:28:51.232547 kubelet[2459]: E0123 19:28:51.232112 2459 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 23 19:28:52.055745 kubelet[2459]: I0123 19:28:52.050653 2459 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:28:52.079404 kubelet[2459]: I0123 19:28:52.077529 2459 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 19:28:52.079404 kubelet[2459]: E0123 19:28:52.077597 2459 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 23 19:28:52.159877 kubelet[2459]: E0123 19:28:52.158793 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:52.261130 kubelet[2459]: E0123 19:28:52.259774 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:52.361867 kubelet[2459]: E0123 19:28:52.360241 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:52.464403 kubelet[2459]: E0123 19:28:52.463772 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:52.570515 kubelet[2459]: E0123 19:28:52.570457 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:52.671367 kubelet[2459]: E0123 19:28:52.670810 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:52.771727 kubelet[2459]: E0123 19:28:52.771395 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:52.875340 kubelet[2459]: E0123 19:28:52.872949 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:52.975504 kubelet[2459]: E0123 19:28:52.975462 2459 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:53.079755 kubelet[2459]: E0123 19:28:53.079403 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:53.181774 kubelet[2459]: E0123 19:28:53.180583 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:53.284159 kubelet[2459]: E0123 19:28:53.281423 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:53.382956 kubelet[2459]: E0123 19:28:53.382603 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:53.489084 kubelet[2459]: E0123 19:28:53.484682 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:53.586410 kubelet[2459]: E0123 19:28:53.585056 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:53.686539 kubelet[2459]: E0123 19:28:53.685617 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:53.787761 kubelet[2459]: E0123 19:28:53.787217 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:53.892402 kubelet[2459]: E0123 19:28:53.888674 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:53.990108 kubelet[2459]: E0123 19:28:53.990049 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:54.095233 kubelet[2459]: E0123 19:28:54.093582 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:54.200825 kubelet[2459]: E0123 19:28:54.195381 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:54.296915 kubelet[2459]: E0123 19:28:54.296487 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:54.398565 kubelet[2459]: E0123 19:28:54.398074 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:54.508740 kubelet[2459]: E0123 19:28:54.499856 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:54.600683 kubelet[2459]: E0123 19:28:54.600416 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:54.702255 kubelet[2459]: E0123 19:28:54.701762 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:54.817760 kubelet[2459]: E0123 19:28:54.805863 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:54.914237 kubelet[2459]: E0123 19:28:54.912718 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:28:55.012908 kubelet[2459]: E0123 19:28:55.012817 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jan 23 19:28:55.096507 kubelet[2459]: I0123 19:28:55.093052 2459 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 19:28:55.301961 kubelet[2459]: I0123 19:28:55.301357 2459 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 19:28:55.361522 kubelet[2459]: I0123 19:28:55.360111 2459 apiserver.go:52] "Watching apiserver" Jan 23 19:28:55.369451 kubelet[2459]: E0123 19:28:55.367630 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:55.408586 kubelet[2459]: I0123 19:28:55.401351 2459 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 19:28:55.432682 kubelet[2459]: E0123 19:28:55.432245 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:55.440090 kubelet[2459]: I0123 19:28:55.438893 2459 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 19:28:55.460330 kubelet[2459]: E0123 19:28:55.460212 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:56.146798 kubelet[2459]: E0123 19:28:56.146524 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:56.454477 kubelet[2459]: I0123 19:28:56.449224 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.449105488 podStartE2EDuration="1.449105488s" podCreationTimestamp="2026-01-23 19:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:28:56.377521195 +0000 UTC m=+24.638778595" watchObservedRunningTime="2026-01-23 19:28:56.449105488 +0000 UTC m=+24.710362867" Jan 23 19:28:56.534885 kubelet[2459]: I0123 19:28:56.534551 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.534519831 podStartE2EDuration="1.534519831s" podCreationTimestamp="2026-01-23 19:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:28:56.451818083 +0000 UTC m=+24.713075482" watchObservedRunningTime="2026-01-23 19:28:56.534519831 +0000 UTC m=+24.795777200" Jan 23 19:28:56.633149 kubelet[2459]: I0123 19:28:56.632924 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.632895186 podStartE2EDuration="1.632895186s" podCreationTimestamp="2026-01-23 19:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:28:56.540863528 +0000 UTC m=+24.802120897" watchObservedRunningTime="2026-01-23 19:28:56.632895186 +0000 UTC m=+24.894152566" Jan 23 19:29:00.472473 kubelet[2459]: E0123 19:29:00.472324 2459 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:00.938887 systemd[1]: Reload requested from client PID 2750 ('systemctl') (unit session-7.scope)... Jan 23 19:29:00.938941 systemd[1]: Reloading... Jan 23 19:29:01.422621 zram_generator::config[2796]: No configuration found. Jan 23 19:29:02.243696 systemd[1]: Reloading finished in 1302 ms. Jan 23 19:29:02.379087 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:29:02.449201 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 19:29:02.449918 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:29:02.450333 systemd[1]: kubelet.service: Consumed 6.787s CPU time, 134.6M memory peak. Jan 23 19:29:02.455378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:29:03.258231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:29:03.285714 (kubelet)[2837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 19:29:03.530142 kubelet[2837]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 19:29:03.530142 kubelet[2837]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 19:29:03.530142 kubelet[2837]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 19:29:03.530142 kubelet[2837]: I0123 19:29:03.529164 2837 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 19:29:03.564093 kubelet[2837]: I0123 19:29:03.563806 2837 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 19:29:03.564093 kubelet[2837]: I0123 19:29:03.563868 2837 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 19:29:03.567547 kubelet[2837]: I0123 19:29:03.565104 2837 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 19:29:03.570528 kubelet[2837]: I0123 19:29:03.568993 2837 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 19:29:03.584547 kubelet[2837]: I0123 19:29:03.583083 2837 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 19:29:03.673514 kubelet[2837]: I0123 19:29:03.672918 2837 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 19:29:03.705559 kubelet[2837]: I0123 19:29:03.703619 2837 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 19:29:03.705559 kubelet[2837]: I0123 19:29:03.704702 2837 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 19:29:03.705798 kubelet[2837]: I0123 19:29:03.704743 2837 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 19:29:03.708789 kubelet[2837]: I0123 19:29:03.707832 2837 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 19:29:03.708789 kubelet[2837]: I0123 19:29:03.707853 2837 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 19:29:03.708789 kubelet[2837]: I0123 19:29:03.707921 2837 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:29:03.709218 kubelet[2837]: I0123 19:29:03.708843 2837 kubelet.go:480] "Attempting to sync node with API server" Jan 23 19:29:03.709218 kubelet[2837]: I0123 19:29:03.708863 2837 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 19:29:03.709218 kubelet[2837]: I0123 19:29:03.708896 2837 kubelet.go:386] "Adding apiserver pod source" Jan 23 19:29:03.709218 kubelet[2837]: I0123 19:29:03.708919 2837 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 19:29:03.730951 kubelet[2837]: I0123 19:29:03.729637 2837 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 19:29:03.735442 kubelet[2837]: I0123 19:29:03.734935 2837 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 19:29:03.743568 kubelet[2837]: I0123 19:29:03.743429 2837 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 19:29:03.744184 kubelet[2837]: I0123 19:29:03.743759 2837 server.go:1289] "Started kubelet" Jan 23 19:29:03.757477 kubelet[2837]: I0123 19:29:03.755557 2837 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 19:29:03.761124 kubelet[2837]: I0123 
19:29:03.761018 2837 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 19:29:03.767445 kubelet[2837]: I0123 19:29:03.767195 2837 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 19:29:03.770504 kubelet[2837]: I0123 19:29:03.770437 2837 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 19:29:03.775336 kubelet[2837]: I0123 19:29:03.775007 2837 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 19:29:03.778536 kubelet[2837]: I0123 19:29:03.778152 2837 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 19:29:03.780510 kubelet[2837]: E0123 19:29:03.780359 2837 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:29:03.788340 kubelet[2837]: I0123 19:29:03.787924 2837 reconciler.go:26] "Reconciler: start to sync state" Jan 23 19:29:03.790982 kubelet[2837]: I0123 19:29:03.790913 2837 server.go:317] "Adding debug handlers to kubelet server" Jan 23 19:29:03.792927 kubelet[2837]: I0123 19:29:03.792811 2837 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 19:29:03.838348 kubelet[2837]: I0123 19:29:03.834479 2837 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 19:29:03.839219 kubelet[2837]: E0123 19:29:03.839174 2837 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 19:29:03.853590 kubelet[2837]: I0123 19:29:03.853530 2837 factory.go:223] Registration of the containerd container factory successfully Jan 23 19:29:03.853590 kubelet[2837]: I0123 19:29:03.853579 2837 factory.go:223] Registration of the systemd container factory successfully Jan 23 19:29:03.870011 kubelet[2837]: I0123 19:29:03.869693 2837 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 19:29:03.878561 kubelet[2837]: I0123 19:29:03.878479 2837 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 19:29:03.878561 kubelet[2837]: I0123 19:29:03.878576 2837 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 19:29:03.882152 kubelet[2837]: I0123 19:29:03.882079 2837 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
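[editor's note] "Systemd watchdog is not enabled" here (and in the first kubelet run above) reflects probing the watchdog that systemd advertises through the WATCHDOG_USEC environment variable when a unit sets WatchdogSec=. A stdlib sketch of that detection; the kubelet's exact code path is not shown in the log, so treat this as illustrative:

package main

import (
	"fmt"
	"os"
	"strconv"
	"time"
)

func main() {
	// systemd exports WATCHDOG_USEC (microseconds) to units with WatchdogSec=.
	// Absent or invalid means watchdog health checking should not be started.
	usec, err := strconv.ParseInt(os.Getenv("WATCHDOG_USEC"), 10, 64)
	if err != nil || usec <= 0 {
		fmt.Println("systemd watchdog is not enabled, health checking will not be started")
		return
	}
	fmt.Printf("watchdog enabled, interval %v\n", time.Duration(usec)*time.Microsecond)
}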
Jan 23 19:29:03.882152 kubelet[2837]: I0123 19:29:03.882123 2837 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 19:29:03.882646 kubelet[2837]: E0123 19:29:03.882188 2837 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 19:29:03.984340 kubelet[2837]: E0123 19:29:03.983893 2837 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 19:29:04.060905 kubelet[2837]: I0123 19:29:04.058241 2837 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 19:29:04.060905 kubelet[2837]: I0123 19:29:04.058397 2837 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 19:29:04.060905 kubelet[2837]: I0123 19:29:04.058470 2837 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:29:04.060905 kubelet[2837]: I0123 19:29:04.058646 2837 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 19:29:04.060905 kubelet[2837]: I0123 19:29:04.058655 2837 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 19:29:04.060905 kubelet[2837]: I0123 19:29:04.058672 2837 policy_none.go:49] "None policy: Start" Jan 23 19:29:04.060905 kubelet[2837]: I0123 19:29:04.058683 2837 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 19:29:04.060905 kubelet[2837]: I0123 19:29:04.058693 2837 state_mem.go:35] "Initializing new in-memory state store" Jan 23 19:29:04.060905 kubelet[2837]: I0123 19:29:04.058885 2837 state_mem.go:75] "Updated machine memory state" Jan 23 19:29:04.091101 kubelet[2837]: E0123 19:29:04.091013 2837 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 19:29:04.092145 kubelet[2837]: I0123 19:29:04.091635 2837 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 19:29:04.092145 kubelet[2837]: I0123 19:29:04.091656 2837 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 19:29:04.092145 kubelet[2837]: I0123 19:29:04.092072 2837 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 19:29:04.104764 kubelet[2837]: E0123 19:29:04.104575 2837 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 19:29:04.188581 kubelet[2837]: I0123 19:29:04.188381 2837 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 19:29:04.191595 kubelet[2837]: I0123 19:29:04.191570 2837 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 19:29:04.193685 kubelet[2837]: I0123 19:29:04.192469 2837 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 19:29:04.232094 kubelet[2837]: E0123 19:29:04.231950 2837 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 23 19:29:04.239005 kubelet[2837]: E0123 19:29:04.238757 2837 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 23 19:29:04.239005 kubelet[2837]: E0123 19:29:04.238866 2837 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 23 19:29:04.241194 kubelet[2837]: I0123 19:29:04.240573 2837 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:29:04.276621 kubelet[2837]: I0123 19:29:04.276384 2837 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 23 19:29:04.276621 kubelet[2837]: I0123 19:29:04.276545 2837 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 19:29:04.308624 kubelet[2837]: I0123 19:29:04.302537 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 23 19:29:04.308624 kubelet[2837]: I0123 19:29:04.302651 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/90d2eb79e7fa5757fc0f3149aa26c471-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"90d2eb79e7fa5757fc0f3149aa26c471\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:29:04.308624 kubelet[2837]: I0123 19:29:04.302684 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:29:04.308624 kubelet[2837]: I0123 19:29:04.302708 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:29:04.308624 kubelet[2837]: I0123 19:29:04.302740 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 23 19:29:04.316429 kubelet[2837]: I0123 19:29:04.302836 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:29:04.316429 kubelet[2837]: I0123 19:29:04.302858 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/90d2eb79e7fa5757fc0f3149aa26c471-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"90d2eb79e7fa5757fc0f3149aa26c471\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:29:04.316429 kubelet[2837]: I0123 19:29:04.302880 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90d2eb79e7fa5757fc0f3149aa26c471-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"90d2eb79e7fa5757fc0f3149aa26c471\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:29:04.316429 kubelet[2837]: I0123 19:29:04.302901 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:29:04.535139 kubelet[2837]: E0123 19:29:04.534660 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:04.547648 kubelet[2837]: E0123 19:29:04.547352 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:04.547648 kubelet[2837]: E0123 19:29:04.547396 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:04.751911 kubelet[2837]: I0123 19:29:04.751736 2837 apiserver.go:52] "Watching apiserver" Jan 23 19:29:04.798812 kubelet[2837]: I0123 19:29:04.795966 2837 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 19:29:04.979535 kubelet[2837]: E0123 19:29:04.977625 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:04.979535 kubelet[2837]: E0123 19:29:04.978119 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:04.979535 kubelet[2837]: E0123 19:29:04.978492 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:06.178242 kubelet[2837]: E0123 19:29:06.178131 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:06.199807 kubelet[2837]: E0123 19:29:06.195819 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:06.333806 kubelet[2837]: I0123 19:29:06.329734 2837 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 19:29:06.367163 containerd[1548]: time="2026-01-23T19:29:06.363937614Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 19:29:06.367993 kubelet[2837]: I0123 19:29:06.366666 2837 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 19:29:06.964957 systemd[1]: Created slice kubepods-besteffort-poded40d8e6_cc42_4a2e_aa33_5b804e21cc0c.slice - libcontainer container kubepods-besteffort-poded40d8e6_cc42_4a2e_aa33_5b804e21cc0c.slice. Jan 23 19:29:07.032813 kubelet[2837]: I0123 19:29:07.025340 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ed40d8e6-cc42-4a2e-aa33-5b804e21cc0c-kube-proxy\") pod \"kube-proxy-dnd6l\" (UID: \"ed40d8e6-cc42-4a2e-aa33-5b804e21cc0c\") " pod="kube-system/kube-proxy-dnd6l" Jan 23 19:29:07.032813 kubelet[2837]: I0123 19:29:07.025614 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgrj8\" (UniqueName: \"kubernetes.io/projected/ed40d8e6-cc42-4a2e-aa33-5b804e21cc0c-kube-api-access-mgrj8\") pod \"kube-proxy-dnd6l\" (UID: \"ed40d8e6-cc42-4a2e-aa33-5b804e21cc0c\") " pod="kube-system/kube-proxy-dnd6l" Jan 23 19:29:07.032813 kubelet[2837]: I0123 19:29:07.025725 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed40d8e6-cc42-4a2e-aa33-5b804e21cc0c-xtables-lock\") pod \"kube-proxy-dnd6l\" (UID: \"ed40d8e6-cc42-4a2e-aa33-5b804e21cc0c\") " pod="kube-system/kube-proxy-dnd6l" Jan 23 19:29:07.032813 kubelet[2837]: I0123 19:29:07.029364 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed40d8e6-cc42-4a2e-aa33-5b804e21cc0c-lib-modules\") pod \"kube-proxy-dnd6l\" (UID: \"ed40d8e6-cc42-4a2e-aa33-5b804e21cc0c\") " pod="kube-system/kube-proxy-dnd6l" Jan 23 19:29:07.505228 kubelet[2837]: E0123 19:29:07.502558 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:07.697878 kubelet[2837]: E0123 19:29:07.693531 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:07.698085 containerd[1548]: time="2026-01-23T19:29:07.694664958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dnd6l,Uid:ed40d8e6-cc42-4a2e-aa33-5b804e21cc0c,Namespace:kube-system,Attempt:0,}" Jan 23 19:29:07.852513 containerd[1548]: time="2026-01-23T19:29:07.848921564Z" level=info msg="connecting to shim 212c21e015393f0cb912a85ea026855767b1951843eab73fc4a42fa8e92f95f2" address="unix:///run/containerd/s/372bb14bcc2eb1a0660ea7e142f359295b02d30b2aad8c6d449d786e34936256" namespace=k8s.io protocol=ttrpc version=3 
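The recurring dns.go:153 "Nameserver limits exceeded" errors throughout this boot reflect the kubelet capping pod resolv.conf files at three nameservers (glibc's MAXNS limit); the host apparently lists at least one more, so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied. A rough stdlib-only sketch of that trimming — not the kubelet's actual parser:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc's MAXNS; the kubelet enforces the same cap

// applyNameserverLimit mimics the behaviour behind the dns.go:153 message:
// keep at most three nameserver entries and report whether any were dropped.
func applyNameserverLimit(resolvConf string) (kept []string, truncated bool) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) == maxNameservers {
				truncated = true
				continue
			}
			kept = append(kept, fields[1])
		}
	}
	return kept, truncated
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	kept, truncated := applyNameserverLimit(conf)
	// Prints "1.1.1.1 1.0.0.1 8.8.8.8 true" — the applied line in the log.
	fmt.Println(strings.Join(kept, " "), truncated)
}
```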
Jan 23 19:29:08.108796 systemd[1]: Started cri-containerd-212c21e015393f0cb912a85ea026855767b1951843eab73fc4a42fa8e92f95f2.scope - libcontainer container 212c21e015393f0cb912a85ea026855767b1951843eab73fc4a42fa8e92f95f2. Jan 23 19:29:08.221602 kubelet[2837]: E0123 19:29:08.215182 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:08.423629 containerd[1548]: time="2026-01-23T19:29:08.419379215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dnd6l,Uid:ed40d8e6-cc42-4a2e-aa33-5b804e21cc0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"212c21e015393f0cb912a85ea026855767b1951843eab73fc4a42fa8e92f95f2\"" Jan 23 19:29:08.430022 kubelet[2837]: E0123 19:29:08.428477 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:08.454813 containerd[1548]: time="2026-01-23T19:29:08.451725062Z" level=info msg="CreateContainer within sandbox \"212c21e015393f0cb912a85ea026855767b1951843eab73fc4a42fa8e92f95f2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 19:29:08.612349 containerd[1548]: time="2026-01-23T19:29:08.607657760Z" level=info msg="Container 68aec34b8ee041f74476dd94758e2a79135bcc4d8e55bb4b78fbeefa2a272f81: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:29:08.612507 kubelet[2837]: I0123 19:29:08.611550 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/65cf7775-ea7f-48a5-91da-5b6f2d69c29e-var-lib-calico\") pod \"tigera-operator-7dcd859c48-khtsm\" (UID: \"65cf7775-ea7f-48a5-91da-5b6f2d69c29e\") " pod="tigera-operator/tigera-operator-7dcd859c48-khtsm" Jan 23 19:29:08.612507 kubelet[2837]: I0123 19:29:08.611667 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk4qk\" (UniqueName: \"kubernetes.io/projected/65cf7775-ea7f-48a5-91da-5b6f2d69c29e-kube-api-access-hk4qk\") pod \"tigera-operator-7dcd859c48-khtsm\" (UID: \"65cf7775-ea7f-48a5-91da-5b6f2d69c29e\") " pod="tigera-operator/tigera-operator-7dcd859c48-khtsm" Jan 23 19:29:08.635996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924548757.mount: Deactivated successfully. Jan 23 19:29:08.643173 systemd[1]: Created slice kubepods-besteffort-pod65cf7775_ea7f_48a5_91da_5b6f2d69c29e.slice - libcontainer container kubepods-besteffort-pod65cf7775_ea7f_48a5_91da_5b6f2d69c29e.slice. 
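Since the NodeConfig earlier shows CgroupDriver "systemd" with CgroupVersion 2, the kubepods-besteffort-pod….slice unit systemd creates above and the cri-containerd-….scope it starts here map onto a unified-hierarchy cgroup path roughly like the following (layout inferred from the unit names, not taken from this log):

```
/sys/fs/cgroup/kubepods.slice/
└─ kubepods-besteffort.slice/
   └─ kubepods-besteffort-poded40d8e6_cc42_4a2e_aa33_5b804e21cc0c.slice/
      └─ cri-containerd-212c21e015393f0cb912a85ea026855767b1951843eab73fc4a42fa8e92f95f2.scope
```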
Jan 23 19:29:08.699585 containerd[1548]: time="2026-01-23T19:29:08.698180788Z" level=info msg="CreateContainer within sandbox \"212c21e015393f0cb912a85ea026855767b1951843eab73fc4a42fa8e92f95f2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"68aec34b8ee041f74476dd94758e2a79135bcc4d8e55bb4b78fbeefa2a272f81\"" Jan 23 19:29:08.713589 containerd[1548]: time="2026-01-23T19:29:08.707017166Z" level=info msg="StartContainer for \"68aec34b8ee041f74476dd94758e2a79135bcc4d8e55bb4b78fbeefa2a272f81\"" Jan 23 19:29:08.727332 containerd[1548]: time="2026-01-23T19:29:08.726034947Z" level=info msg="connecting to shim 68aec34b8ee041f74476dd94758e2a79135bcc4d8e55bb4b78fbeefa2a272f81" address="unix:///run/containerd/s/372bb14bcc2eb1a0660ea7e142f359295b02d30b2aad8c6d449d786e34936256" protocol=ttrpc version=3 Jan 23 19:29:08.873195 systemd[1]: Started cri-containerd-68aec34b8ee041f74476dd94758e2a79135bcc4d8e55bb4b78fbeefa2a272f81.scope - libcontainer container 68aec34b8ee041f74476dd94758e2a79135bcc4d8e55bb4b78fbeefa2a272f81. Jan 23 19:29:09.193219 containerd[1548]: time="2026-01-23T19:29:09.169915600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-khtsm,Uid:65cf7775-ea7f-48a5-91da-5b6f2d69c29e,Namespace:tigera-operator,Attempt:0,}" Jan 23 19:29:09.393459 kubelet[2837]: E0123 19:29:09.383605 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:09.564476 containerd[1548]: time="2026-01-23T19:29:09.564155618Z" level=info msg="connecting to shim ef445ca20768cd4f7fb55b4f891f7a1623b9579edd7b2ea5030210250295a88a" address="unix:///run/containerd/s/cec3058fdcf8464b2f8ff60515a206eb27ce1e981258ac61644f49d190868cea" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:29:09.669504 systemd[1]: Started cri-containerd-ef445ca20768cd4f7fb55b4f891f7a1623b9579edd7b2ea5030210250295a88a.scope - libcontainer container ef445ca20768cd4f7fb55b4f891f7a1623b9579edd7b2ea5030210250295a88a. Jan 23 19:29:09.823840 containerd[1548]: time="2026-01-23T19:29:09.820770006Z" level=info msg="StartContainer for \"68aec34b8ee041f74476dd94758e2a79135bcc4d8e55bb4b78fbeefa2a272f81\" returns successfully" Jan 23 19:29:10.123839 containerd[1548]: time="2026-01-23T19:29:10.123133891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-khtsm,Uid:65cf7775-ea7f-48a5-91da-5b6f2d69c29e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ef445ca20768cd4f7fb55b4f891f7a1623b9579edd7b2ea5030210250295a88a\"" Jan 23 19:29:10.429591 containerd[1548]: time="2026-01-23T19:29:10.365124376Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 19:29:10.465454 kubelet[2837]: E0123 19:29:10.465176 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:11.490918 kubelet[2837]: E0123 19:29:11.490837 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:11.999420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount716349555.mount: Deactivated successfully. 
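The RunPodSandbox / CreateContainer / StartContainer messages here are containerd's CRI service answering the kubelet over gRPC. A sketch of that three-step sequence using the public CRI Go bindings — the config is trimmed to the fields visible in the log (a real call also needs an image, mounts, and so on), and error handling is minimal:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI socket; the kubelet speaks the same gRPC API.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox — the "RunPodSandbox for &PodSandboxMetadata{...}" lines.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "kube-proxy-dnd6l", Namespace: "kube-system",
			Uid: "ed40d8e6-cc42-4a2e-aa33-5b804e21cc0c",
		},
	}
	sb, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer — the "CreateContainer within sandbox ..." lines.
	ctr, err := client.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &runtimeapi.ContainerConfig{Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"}},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer — the "StartContainer for ... returns successfully" lines.
	if _, err := client.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```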
Jan 23 19:29:12.576610 kubelet[2837]: E0123 19:29:12.576403 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:12.696966 kubelet[2837]: I0123 19:29:12.692214 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dnd6l" podStartSLOduration=6.692183606 podStartE2EDuration="6.692183606s" podCreationTimestamp="2026-01-23 19:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:29:10.558176557 +0000 UTC m=+7.235716410" watchObservedRunningTime="2026-01-23 19:29:12.692183606 +0000 UTC m=+9.369723440" Jan 23 19:29:13.599964 kubelet[2837]: E0123 19:29:13.597739 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:14.260028 kubelet[2837]: E0123 19:29:14.259990 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:14.623168 kubelet[2837]: E0123 19:29:14.620882 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:17.286928 containerd[1548]: time="2026-01-23T19:29:17.286867311Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:29:17.294334 containerd[1548]: time="2026-01-23T19:29:17.294229024Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 23 19:29:17.302770 containerd[1548]: time="2026-01-23T19:29:17.302693161Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:29:17.330365 containerd[1548]: time="2026-01-23T19:29:17.330188730Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:29:17.331892 containerd[1548]: time="2026-01-23T19:29:17.331647860Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 6.909461035s" Jan 23 19:29:17.331892 containerd[1548]: time="2026-01-23T19:29:17.331757152Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 19:29:17.344717 containerd[1548]: time="2026-01-23T19:29:17.343550480Z" level=info msg="CreateContainer within sandbox \"ef445ca20768cd4f7fb55b4f891f7a1623b9579edd7b2ea5030210250295a88a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 19:29:17.390382 containerd[1548]: time="2026-01-23T19:29:17.390331201Z" level=info msg="Container 476a445c9f2593f2ef0d9a9338121307bc1eaa0a5e32cc3e0083d8675db85dcd: CDI devices from 
CRI Config.CDIDevices: []" Jan 23 19:29:17.433209 containerd[1548]: time="2026-01-23T19:29:17.432241916Z" level=info msg="CreateContainer within sandbox \"ef445ca20768cd4f7fb55b4f891f7a1623b9579edd7b2ea5030210250295a88a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"476a445c9f2593f2ef0d9a9338121307bc1eaa0a5e32cc3e0083d8675db85dcd\"" Jan 23 19:29:17.433440 containerd[1548]: time="2026-01-23T19:29:17.433221404Z" level=info msg="StartContainer for \"476a445c9f2593f2ef0d9a9338121307bc1eaa0a5e32cc3e0083d8675db85dcd\"" Jan 23 19:29:17.434280 containerd[1548]: time="2026-01-23T19:29:17.434202260Z" level=info msg="connecting to shim 476a445c9f2593f2ef0d9a9338121307bc1eaa0a5e32cc3e0083d8675db85dcd" address="unix:///run/containerd/s/cec3058fdcf8464b2f8ff60515a206eb27ce1e981258ac61644f49d190868cea" protocol=ttrpc version=3 Jan 23 19:29:17.599038 systemd[1]: Started cri-containerd-476a445c9f2593f2ef0d9a9338121307bc1eaa0a5e32cc3e0083d8675db85dcd.scope - libcontainer container 476a445c9f2593f2ef0d9a9338121307bc1eaa0a5e32cc3e0083d8675db85dcd. Jan 23 19:29:17.724862 containerd[1548]: time="2026-01-23T19:29:17.724696542Z" level=info msg="StartContainer for \"476a445c9f2593f2ef0d9a9338121307bc1eaa0a5e32cc3e0083d8675db85dcd\" returns successfully" Jan 23 19:29:29.737851 sudo[1757]: pam_unix(sudo:session): session closed for user root Jan 23 19:29:29.788380 sshd[1756]: Connection closed by 10.0.0.1 port 40534 Jan 23 19:29:29.787742 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Jan 23 19:29:29.801794 systemd[1]: sshd@6-10.0.0.128:22-10.0.0.1:40534.service: Deactivated successfully. Jan 23 19:29:29.832222 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 19:29:29.837691 systemd[1]: session-7.scope: Consumed 15.193s CPU time, 220.7M memory peak. Jan 23 19:29:29.853363 systemd-logind[1534]: Session 7 logged out. Waiting for processes to exit. Jan 23 19:29:29.858551 systemd-logind[1534]: Removed session 7. 
Jan 23 19:29:45.971649 kubelet[2837]: I0123 19:29:45.970401 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-khtsm" podStartSLOduration=30.822854953 podStartE2EDuration="37.970378592s" podCreationTimestamp="2026-01-23 19:29:08 +0000 UTC" firstStartedPulling="2026-01-23 19:29:10.186014557 +0000 UTC m=+6.863554390" lastFinishedPulling="2026-01-23 19:29:17.333538195 +0000 UTC m=+14.011078029" observedRunningTime="2026-01-23 19:29:18.786491659 +0000 UTC m=+15.464031492" watchObservedRunningTime="2026-01-23 19:29:45.970378592 +0000 UTC m=+42.647918485" Jan 23 19:29:46.100781 kubelet[2837]: I0123 19:29:46.089943 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6a3c13c-fde0-4844-a64e-4db3f7b6fc6c-tigera-ca-bundle\") pod \"calico-typha-766b94fd46-24g7w\" (UID: \"b6a3c13c-fde0-4844-a64e-4db3f7b6fc6c\") " pod="calico-system/calico-typha-766b94fd46-24g7w" Jan 23 19:29:46.100781 kubelet[2837]: I0123 19:29:46.090023 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6knk\" (UniqueName: \"kubernetes.io/projected/b6a3c13c-fde0-4844-a64e-4db3f7b6fc6c-kube-api-access-h6knk\") pod \"calico-typha-766b94fd46-24g7w\" (UID: \"b6a3c13c-fde0-4844-a64e-4db3f7b6fc6c\") " pod="calico-system/calico-typha-766b94fd46-24g7w" Jan 23 19:29:46.100781 kubelet[2837]: I0123 19:29:46.090053 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b6a3c13c-fde0-4844-a64e-4db3f7b6fc6c-typha-certs\") pod \"calico-typha-766b94fd46-24g7w\" (UID: \"b6a3c13c-fde0-4844-a64e-4db3f7b6fc6c\") " pod="calico-system/calico-typha-766b94fd46-24g7w" Jan 23 19:29:46.160390 systemd[1]: Created slice kubepods-besteffort-podb6a3c13c_fde0_4844_a64e_4db3f7b6fc6c.slice - libcontainer container kubepods-besteffort-podb6a3c13c_fde0_4844_a64e_4db3f7b6fc6c.slice. 
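The pod_startup_latency_tracker entry above is internally consistent: podStartSLOduration is the end-to-end duration minus the time spent pulling images, i.e. 37.970378592s − (19:29:17.333538195 − 19:29:10.186014557) ≈ 30.822854953s. A quick check of that arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

// Verifies the pod_startup_latency_tracker arithmetic from the log above:
// SLO duration = end-to-end duration minus the image-pull window.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	firstPull, _ := time.Parse(layout, "2026-01-23 19:29:10.186014557 +0000 UTC")
	lastPull, _ := time.Parse(layout, "2026-01-23 19:29:17.333538195 +0000 UTC")

	e2e := 37970378592 * time.Nanosecond // podStartE2EDuration="37.970378592s"
	slo := e2e - lastPull.Sub(firstPull)
	// Prints 30.822854954s — matching podStartSLOduration up to a
	// nanosecond of float rounding in the logged value.
	fmt.Println(slo)
}
```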
Jan 23 19:29:46.473453 kubelet[2837]: E0123 19:29:46.473420 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:46.481491 containerd[1548]: time="2026-01-23T19:29:46.474788207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-766b94fd46-24g7w,Uid:b6a3c13c-fde0-4844-a64e-4db3f7b6fc6c,Namespace:calico-system,Attempt:0,}" Jan 23 19:29:46.602555 kubelet[2837]: I0123 19:29:46.601498 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/51a90530-d91f-45ff-90be-efee2aeb302c-flexvol-driver-host\") pod \"calico-node-68ckt\" (UID: \"51a90530-d91f-45ff-90be-efee2aeb302c\") " pod="calico-system/calico-node-68ckt" Jan 23 19:29:46.602555 kubelet[2837]: I0123 19:29:46.601609 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/51a90530-d91f-45ff-90be-efee2aeb302c-policysync\") pod \"calico-node-68ckt\" (UID: \"51a90530-d91f-45ff-90be-efee2aeb302c\") " pod="calico-system/calico-node-68ckt" Jan 23 19:29:46.602555 kubelet[2837]: I0123 19:29:46.601640 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/51a90530-d91f-45ff-90be-efee2aeb302c-var-lib-calico\") pod \"calico-node-68ckt\" (UID: \"51a90530-d91f-45ff-90be-efee2aeb302c\") " pod="calico-system/calico-node-68ckt" Jan 23 19:29:46.602555 kubelet[2837]: I0123 19:29:46.601664 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/51a90530-d91f-45ff-90be-efee2aeb302c-cni-net-dir\") pod \"calico-node-68ckt\" (UID: \"51a90530-d91f-45ff-90be-efee2aeb302c\") " pod="calico-system/calico-node-68ckt" Jan 23 19:29:46.602555 kubelet[2837]: I0123 19:29:46.601689 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51a90530-d91f-45ff-90be-efee2aeb302c-lib-modules\") pod \"calico-node-68ckt\" (UID: \"51a90530-d91f-45ff-90be-efee2aeb302c\") " pod="calico-system/calico-node-68ckt" Jan 23 19:29:46.627484 kubelet[2837]: I0123 19:29:46.601713 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51a90530-d91f-45ff-90be-efee2aeb302c-tigera-ca-bundle\") pod \"calico-node-68ckt\" (UID: \"51a90530-d91f-45ff-90be-efee2aeb302c\") " pod="calico-system/calico-node-68ckt" Jan 23 19:29:46.627484 kubelet[2837]: I0123 19:29:46.601740 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/51a90530-d91f-45ff-90be-efee2aeb302c-var-run-calico\") pod \"calico-node-68ckt\" (UID: \"51a90530-d91f-45ff-90be-efee2aeb302c\") " pod="calico-system/calico-node-68ckt" Jan 23 19:29:46.627484 kubelet[2837]: I0123 19:29:46.601765 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/51a90530-d91f-45ff-90be-efee2aeb302c-cni-log-dir\") pod \"calico-node-68ckt\" (UID: \"51a90530-d91f-45ff-90be-efee2aeb302c\") " 
pod="calico-system/calico-node-68ckt" Jan 23 19:29:46.627484 kubelet[2837]: I0123 19:29:46.601822 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/51a90530-d91f-45ff-90be-efee2aeb302c-cni-bin-dir\") pod \"calico-node-68ckt\" (UID: \"51a90530-d91f-45ff-90be-efee2aeb302c\") " pod="calico-system/calico-node-68ckt" Jan 23 19:29:46.627484 kubelet[2837]: I0123 19:29:46.601849 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/51a90530-d91f-45ff-90be-efee2aeb302c-node-certs\") pod \"calico-node-68ckt\" (UID: \"51a90530-d91f-45ff-90be-efee2aeb302c\") " pod="calico-system/calico-node-68ckt" Jan 23 19:29:46.627683 kubelet[2837]: I0123 19:29:46.601873 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51a90530-d91f-45ff-90be-efee2aeb302c-xtables-lock\") pod \"calico-node-68ckt\" (UID: \"51a90530-d91f-45ff-90be-efee2aeb302c\") " pod="calico-system/calico-node-68ckt" Jan 23 19:29:46.627683 kubelet[2837]: I0123 19:29:46.601905 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sppcn\" (UniqueName: \"kubernetes.io/projected/51a90530-d91f-45ff-90be-efee2aeb302c-kube-api-access-sppcn\") pod \"calico-node-68ckt\" (UID: \"51a90530-d91f-45ff-90be-efee2aeb302c\") " pod="calico-system/calico-node-68ckt" Jan 23 19:29:46.665543 systemd[1]: Created slice kubepods-besteffort-pod51a90530_d91f_45ff_90be_efee2aeb302c.slice - libcontainer container kubepods-besteffort-pod51a90530_d91f_45ff_90be_efee2aeb302c.slice. Jan 23 19:29:46.735723 containerd[1548]: time="2026-01-23T19:29:46.735524523Z" level=info msg="connecting to shim 76ebf6e7107df829df8888444af192a1ba70a5b392241f34925ed6f33d465452" address="unix:///run/containerd/s/1ba9b485fb88ae1bffcbf9db4dc2f9769db81e6cbaac89616d6e53bcde304f1e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:29:46.748605 kubelet[2837]: E0123 19:29:46.748457 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.748605 kubelet[2837]: W0123 19:29:46.748513 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.748757 kubelet[2837]: E0123 19:29:46.748607 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.831711 systemd[1]: Started cri-containerd-76ebf6e7107df829df8888444af192a1ba70a5b392241f34925ed6f33d465452.scope - libcontainer container 76ebf6e7107df829df8888444af192a1ba70a5b392241f34925ed6f33d465452. 
Jan 23 19:29:46.833574 kubelet[2837]: E0123 19:29:46.833236 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.833574 kubelet[2837]: W0123 19:29:46.833337 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.833574 kubelet[2837]: E0123 19:29:46.833365 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.872144 kubelet[2837]: E0123 19:29:46.872096 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:29:46.898052 kubelet[2837]: E0123 19:29:46.897080 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.898052 kubelet[2837]: W0123 19:29:46.897109 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.898052 kubelet[2837]: E0123 19:29:46.897455 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.907690 kubelet[2837]: E0123 19:29:46.906623 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.907690 kubelet[2837]: W0123 19:29:46.906755 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.907690 kubelet[2837]: E0123 19:29:46.906786 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.909584 kubelet[2837]: E0123 19:29:46.909567 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.909671 kubelet[2837]: W0123 19:29:46.909658 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.909746 kubelet[2837]: E0123 19:29:46.909732 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:29:46.910213 kubelet[2837]: E0123 19:29:46.910199 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.910370 kubelet[2837]: W0123 19:29:46.910355 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.910431 kubelet[2837]: E0123 19:29:46.910419 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.910801 kubelet[2837]: E0123 19:29:46.910788 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.910871 kubelet[2837]: W0123 19:29:46.910860 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.910925 kubelet[2837]: E0123 19:29:46.910914 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.912603 kubelet[2837]: E0123 19:29:46.912586 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.912773 kubelet[2837]: W0123 19:29:46.912756 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.912847 kubelet[2837]: E0123 19:29:46.912833 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.914586 kubelet[2837]: E0123 19:29:46.914566 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.914739 kubelet[2837]: W0123 19:29:46.914665 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.914739 kubelet[2837]: E0123 19:29:46.914685 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.915217 kubelet[2837]: E0123 19:29:46.915203 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.915417 kubelet[2837]: W0123 19:29:46.915349 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.915417 kubelet[2837]: E0123 19:29:46.915365 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:29:46.915884 kubelet[2837]: E0123 19:29:46.915870 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.915961 kubelet[2837]: W0123 19:29:46.915949 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.916030 kubelet[2837]: E0123 19:29:46.916017 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.916560 kubelet[2837]: E0123 19:29:46.916488 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.916560 kubelet[2837]: W0123 19:29:46.916502 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.916560 kubelet[2837]: E0123 19:29:46.916513 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.917063 kubelet[2837]: E0123 19:29:46.916994 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.917063 kubelet[2837]: W0123 19:29:46.917007 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.917063 kubelet[2837]: E0123 19:29:46.917017 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.917635 kubelet[2837]: E0123 19:29:46.917620 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.917732 kubelet[2837]: W0123 19:29:46.917695 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.917732 kubelet[2837]: E0123 19:29:46.917711 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.918391 kubelet[2837]: E0123 19:29:46.918377 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.918613 kubelet[2837]: W0123 19:29:46.918468 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.918613 kubelet[2837]: E0123 19:29:46.918486 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:29:46.919495 kubelet[2837]: E0123 19:29:46.919480 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.919577 kubelet[2837]: W0123 19:29:46.919564 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.919748 kubelet[2837]: E0123 19:29:46.919672 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.920428 kubelet[2837]: E0123 19:29:46.920352 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.920428 kubelet[2837]: W0123 19:29:46.920368 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.920428 kubelet[2837]: E0123 19:29:46.920380 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.920901 kubelet[2837]: E0123 19:29:46.920854 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.920901 kubelet[2837]: W0123 19:29:46.920870 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.920901 kubelet[2837]: E0123 19:29:46.920881 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.921634 kubelet[2837]: E0123 19:29:46.921618 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.921853 kubelet[2837]: W0123 19:29:46.921706 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.921853 kubelet[2837]: E0123 19:29:46.921722 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.922439 kubelet[2837]: E0123 19:29:46.922423 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.922529 kubelet[2837]: W0123 19:29:46.922514 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.922598 kubelet[2837]: E0123 19:29:46.922582 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:29:46.923062 kubelet[2837]: E0123 19:29:46.922986 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.923062 kubelet[2837]: W0123 19:29:46.922999 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.923062 kubelet[2837]: E0123 19:29:46.923011 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.923570 kubelet[2837]: E0123 19:29:46.923527 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.923570 kubelet[2837]: W0123 19:29:46.923540 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.923570 kubelet[2837]: E0123 19:29:46.923551 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.924358 kubelet[2837]: E0123 19:29:46.924242 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.924358 kubelet[2837]: W0123 19:29:46.924327 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.924358 kubelet[2837]: E0123 19:29:46.924338 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.924752 kubelet[2837]: I0123 19:29:46.924513 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7cbe68df-cea7-49bc-bbd7-253343631e45-socket-dir\") pod \"csi-node-driver-pspcd\" (UID: \"7cbe68df-cea7-49bc-bbd7-253343631e45\") " pod="calico-system/csi-node-driver-pspcd" Jan 23 19:29:46.924907 kubelet[2837]: E0123 19:29:46.924895 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.924971 kubelet[2837]: W0123 19:29:46.924960 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.925030 kubelet[2837]: E0123 19:29:46.925016 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:29:46.927322 kubelet[2837]: E0123 19:29:46.927242 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.927454 kubelet[2837]: W0123 19:29:46.927394 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.927454 kubelet[2837]: E0123 19:29:46.927411 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.928418 kubelet[2837]: E0123 19:29:46.928370 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.928418 kubelet[2837]: W0123 19:29:46.928386 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.928418 kubelet[2837]: E0123 19:29:46.928399 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.929189 kubelet[2837]: I0123 19:29:46.928968 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7cbe68df-cea7-49bc-bbd7-253343631e45-varrun\") pod \"csi-node-driver-pspcd\" (UID: \"7cbe68df-cea7-49bc-bbd7-253343631e45\") " pod="calico-system/csi-node-driver-pspcd" Jan 23 19:29:46.931721 kubelet[2837]: E0123 19:29:46.931581 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.931721 kubelet[2837]: W0123 19:29:46.931593 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.931721 kubelet[2837]: E0123 19:29:46.931607 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.933715 kubelet[2837]: E0123 19:29:46.933696 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.933854 kubelet[2837]: W0123 19:29:46.933840 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.933926 kubelet[2837]: E0123 19:29:46.933910 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:29:46.934552 kubelet[2837]: E0123 19:29:46.934531 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.935432 kubelet[2837]: W0123 19:29:46.935071 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.935432 kubelet[2837]: E0123 19:29:46.935152 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.935432 kubelet[2837]: I0123 19:29:46.935182 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvkh9\" (UniqueName: \"kubernetes.io/projected/7cbe68df-cea7-49bc-bbd7-253343631e45-kube-api-access-jvkh9\") pod \"csi-node-driver-pspcd\" (UID: \"7cbe68df-cea7-49bc-bbd7-253343631e45\") " pod="calico-system/csi-node-driver-pspcd" Jan 23 19:29:46.937159 kubelet[2837]: E0123 19:29:46.937013 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.937530 kubelet[2837]: W0123 19:29:46.937512 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.937695 kubelet[2837]: E0123 19:29:46.937636 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.938900 kubelet[2837]: I0123 19:29:46.938750 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cbe68df-cea7-49bc-bbd7-253343631e45-kubelet-dir\") pod \"csi-node-driver-pspcd\" (UID: \"7cbe68df-cea7-49bc-bbd7-253343631e45\") " pod="calico-system/csi-node-driver-pspcd" Jan 23 19:29:46.940336 kubelet[2837]: E0123 19:29:46.939637 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.940336 kubelet[2837]: W0123 19:29:46.939656 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.940336 kubelet[2837]: E0123 19:29:46.939669 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.940999 kubelet[2837]: E0123 19:29:46.940950 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.940999 kubelet[2837]: W0123 19:29:46.940967 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.940999 kubelet[2837]: E0123 19:29:46.940981 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:29:46.942590 kubelet[2837]: E0123 19:29:46.942541 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.942590 kubelet[2837]: W0123 19:29:46.942558 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.942590 kubelet[2837]: E0123 19:29:46.942572 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.945466 kubelet[2837]: E0123 19:29:46.945403 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.945466 kubelet[2837]: W0123 19:29:46.945427 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.945466 kubelet[2837]: E0123 19:29:46.945445 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.945735 kubelet[2837]: I0123 19:29:46.945656 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7cbe68df-cea7-49bc-bbd7-253343631e45-registration-dir\") pod \"csi-node-driver-pspcd\" (UID: \"7cbe68df-cea7-49bc-bbd7-253343631e45\") " pod="calico-system/csi-node-driver-pspcd" Jan 23 19:29:46.949156 kubelet[2837]: E0123 19:29:46.949096 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.949156 kubelet[2837]: W0123 19:29:46.949118 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.949156 kubelet[2837]: E0123 19:29:46.949134 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:46.951642 kubelet[2837]: E0123 19:29:46.951544 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:46.951642 kubelet[2837]: W0123 19:29:46.951563 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:46.951642 kubelet[2837]: E0123 19:29:46.951581 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 23 19:29:46.979434 kubelet[2837]: E0123 19:29:46.978915 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:29:46.986140 containerd[1548]: time="2026-01-23T19:29:46.986011162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-68ckt,Uid:51a90530-d91f-45ff-90be-efee2aeb302c,Namespace:calico-system,Attempt:0,}"
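Note: the dns.go:153 warning (which also recurs below) means the node's resolv.conf lists more nameservers than the classic resolver limit of three, so kubelet truncates the applied list to 1.1.1.1 1.0.0.1 8.8.8.8. A small sketch of the same check, assuming the standard /etc/resolv.conf location:

#!/usr/bin/env python3
# Count nameserver entries in resolv.conf; anything past the first three
# is dropped (glibc's MAXNS is 3, and kubelet applies the same cap, which
# is what triggers the "Nameserver limits exceeded" log line).
MAX_NAMESERVERS = 3

def nameservers(path: str = "/etc/resolv.conf") -> list[str]:
    found = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                found.append(parts[1])
    return found

if __name__ == "__main__":
    ns = nameservers()
    if len(ns) > MAX_NAMESERVERS:
        print(f"{len(ns)} nameservers configured; only the first "
              f"{MAX_NAMESERVERS} will be applied: {' '.join(ns[:MAX_NAMESERVERS])}")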
Jan 23 19:29:47.121039 containerd[1548]: time="2026-01-23T19:29:47.119174503Z" level=info msg="connecting to shim 2063169b095afda498d3d98fc8070af33fafd372fa9c1a83baafabcaeebb4b68" address="unix:///run/containerd/s/709ad41456b962e80eeaac78d46c7801cb64484d81ddb826626e7ff2c7cc018d" namespace=k8s.io protocol=ttrpc version=3
Jan 23 19:29:47.178943 containerd[1548]: time="2026-01-23T19:29:47.178043127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-766b94fd46-24g7w,Uid:b6a3c13c-fde0-4844-a64e-4db3f7b6fc6c,Namespace:calico-system,Attempt:0,} returns sandbox id \"76ebf6e7107df829df8888444af192a1ba70a5b392241f34925ed6f33d465452\""
Jan 23 19:29:47.192580 kubelet[2837]: E0123 19:29:47.192058 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:29:47.197000 containerd[1548]: time="2026-01-23T19:29:47.196142359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 23 19:29:47.277046 systemd[1]: Started cri-containerd-2063169b095afda498d3d98fc8070af33fafd372fa9c1a83baafabcaeebb4b68.scope - libcontainer container 2063169b095afda498d3d98fc8070af33fafd372fa9c1a83baafabcaeebb4b68.
Jan 23 19:29:47.437328 containerd[1548]: time="2026-01-23T19:29:47.437095663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-68ckt,Uid:51a90530-d91f-45ff-90be-efee2aeb302c,Namespace:calico-system,Attempt:0,} returns sandbox id \"2063169b095afda498d3d98fc8070af33fafd372fa9c1a83baafabcaeebb4b68\""
Jan 23 19:29:47.441939 kubelet[2837]: E0123 19:29:47.441018 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:29:48.586687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1129857680.mount: Deactivated successfully.
Jan 23 19:29:48.887957 kubelet[2837]: E0123 19:29:48.886097 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45"
Jan 23 19:29:50.885572 kubelet[2837]: E0123 19:29:50.884904 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45"
Jan 23 19:29:52.883782 kubelet[2837]: E0123 19:29:52.883604 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45"
Jan 23 19:29:53.052017 containerd[1548]: time="2026-01-23T19:29:53.051862312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:29:53.057578 containerd[1548]: time="2026-01-23T19:29:53.057256804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 23 19:29:53.062457 containerd[1548]: time="2026-01-23T19:29:53.060044398Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:29:53.070999 containerd[1548]: time="2026-01-23T19:29:53.069737228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:29:53.071218 containerd[1548]: time="2026-01-23T19:29:53.071130345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 5.874917945s"
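Note: the pod_workers.go errors above reflect ordering, not a separate fault: csi-node-driver-pspcd cannot be synced until calico-node writes a CNI network config and kubelet flips NetworkReady to true, which is why the same message recurs every couple of seconds while calico-node is still being pulled and started. A hedged sketch of waiting for that config to appear, assuming the conventional /etc/cni/net.d directory (the path may differ on this image):

#!/usr/bin/env python3
# Poll the CNI config directory until the network plugin (here calico-node)
# drops its config; kubelet reports NetworkReady=false ("cni plugin not
# initialized") until a .conf/.conflist file exists.
import os
import time

CNI_CONF_DIR = "/etc/cni/net.d"  # assumption: default CNI conf dir

def wait_for_cni(timeout_s: float = 300.0, interval_s: float = 2.0) -> bool:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            confs = [f for f in os.listdir(CNI_CONF_DIR)
                     if f.endswith((".conf", ".conflist"))]
        except FileNotFoundError:
            confs = []  # directory not created yet
        if confs:
            print(f"CNI config present: {confs}")
            return True
        time.sleep(interval_s)
    return False

if __name__ == "__main__":
    wait_for_cni()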
time="2026-01-23T19:29:53.071187772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 23 19:29:53.074995 containerd[1548]: time="2026-01-23T19:29:53.074522566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 19:29:53.152918 containerd[1548]: time="2026-01-23T19:29:53.152682135Z" level=info msg="CreateContainer within sandbox \"76ebf6e7107df829df8888444af192a1ba70a5b392241f34925ed6f33d465452\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 19:29:53.215973 containerd[1548]: time="2026-01-23T19:29:53.215683308Z" level=info msg="Container 0d841ae0107562f84964ea0f5026c8d9bc08bb828c0bc1a2cfcb6b7f218e5511: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:29:53.287488 containerd[1548]: time="2026-01-23T19:29:53.285433503Z" level=info msg="CreateContainer within sandbox \"76ebf6e7107df829df8888444af192a1ba70a5b392241f34925ed6f33d465452\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0d841ae0107562f84964ea0f5026c8d9bc08bb828c0bc1a2cfcb6b7f218e5511\"" Jan 23 19:29:53.287488 containerd[1548]: time="2026-01-23T19:29:53.287229218Z" level=info msg="StartContainer for \"0d841ae0107562f84964ea0f5026c8d9bc08bb828c0bc1a2cfcb6b7f218e5511\"" Jan 23 19:29:53.291217 containerd[1548]: time="2026-01-23T19:29:53.290248595Z" level=info msg="connecting to shim 0d841ae0107562f84964ea0f5026c8d9bc08bb828c0bc1a2cfcb6b7f218e5511" address="unix:///run/containerd/s/1ba9b485fb88ae1bffcbf9db4dc2f9769db81e6cbaac89616d6e53bcde304f1e" protocol=ttrpc version=3 Jan 23 19:29:53.385044 systemd[1]: Started cri-containerd-0d841ae0107562f84964ea0f5026c8d9bc08bb828c0bc1a2cfcb6b7f218e5511.scope - libcontainer container 0d841ae0107562f84964ea0f5026c8d9bc08bb828c0bc1a2cfcb6b7f218e5511. Jan 23 19:29:53.721990 containerd[1548]: time="2026-01-23T19:29:53.717622075Z" level=info msg="StartContainer for \"0d841ae0107562f84964ea0f5026c8d9bc08bb828c0bc1a2cfcb6b7f218e5511\" returns successfully" Jan 23 19:29:54.520962 kubelet[2837]: E0123 19:29:54.520818 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:54.526641 kubelet[2837]: E0123 19:29:54.526537 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:54.527011 kubelet[2837]: W0123 19:29:54.526835 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:54.527011 kubelet[2837]: E0123 19:29:54.526875 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 23 19:29:54.774944 kubelet[2837]: I0123 19:29:54.772046 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-766b94fd46-24g7w" podStartSLOduration=3.8945501289999997 podStartE2EDuration="9.772019338s" podCreationTimestamp="2026-01-23 19:29:45 +0000 UTC" firstStartedPulling="2026-01-23 19:29:47.195395508 +0000 UTC m=+43.872935341" lastFinishedPulling="2026-01-23 19:29:53.072864707 +0000 UTC m=+49.750404550" observedRunningTime="2026-01-23 19:29:54.635494298 +0000 UTC m=+51.313034141" watchObservedRunningTime="2026-01-23 19:29:54.772019338 +0000 UTC m=+51.449559191"
Jan 23 19:29:54.888424 kubelet[2837]: E0123 19:29:54.883667 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45"
Jan 23 19:29:55.522407 kubelet[2837]: E0123 19:29:55.522227 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
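Note: the startup-latency entry above is internally consistent: podStartSLOduration equals podStartE2EDuration minus the image-pull window, which matches the tracker excluding pull time from the SLO figure. Verified with the monotonic (m=+) offsets from the entry:

#!/usr/bin/env python3
# Cross-check kubelet's pod_startup_latency_tracker figures for
# calico-typha-766b94fd46-24g7w using the m=+ offsets logged above.
first_started_pulling = 43.872935341   # firstStartedPulling m=+ offset
last_finished_pulling = 49.750404550   # lastFinishedPulling m=+ offset
e2e = 9.772019338                      # podStartE2EDuration, seconds

pull_window = last_finished_pulling - first_started_pulling  # 5.877469209 s
slo = e2e - pull_window
print(f"{slo:.9f}s")  # 3.894550129s, matching podStartSLOduration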
Error: unexpected end of JSON input" Jan 23 19:29:55.676491 kubelet[2837]: E0123 19:29:55.676473 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:55.676610 kubelet[2837]: W0123 19:29:55.676569 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:55.676610 kubelet[2837]: E0123 19:29:55.676590 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:55.680613 kubelet[2837]: E0123 19:29:55.680548 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:55.680613 kubelet[2837]: W0123 19:29:55.680570 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:55.680613 kubelet[2837]: E0123 19:29:55.680588 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:55.683713 kubelet[2837]: E0123 19:29:55.683659 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:55.683713 kubelet[2837]: W0123 19:29:55.683680 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:55.683713 kubelet[2837]: E0123 19:29:55.683695 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:55.685558 kubelet[2837]: E0123 19:29:55.685504 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:55.685558 kubelet[2837]: W0123 19:29:55.685521 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:55.685558 kubelet[2837]: E0123 19:29:55.685538 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:55.693408 kubelet[2837]: E0123 19:29:55.688688 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:55.693408 kubelet[2837]: W0123 19:29:55.688707 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:55.693408 kubelet[2837]: E0123 19:29:55.688722 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:29:55.693830 kubelet[2837]: E0123 19:29:55.693803 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:55.693907 kubelet[2837]: W0123 19:29:55.693890 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:55.694001 kubelet[2837]: E0123 19:29:55.693980 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:55.698486 kubelet[2837]: E0123 19:29:55.698456 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:55.698730 kubelet[2837]: W0123 19:29:55.698614 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:55.698730 kubelet[2837]: E0123 19:29:55.698673 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:55.702617 kubelet[2837]: E0123 19:29:55.702575 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:55.702617 kubelet[2837]: W0123 19:29:55.702599 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:55.702617 kubelet[2837]: E0123 19:29:55.702620 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:55.715833 kubelet[2837]: E0123 19:29:55.714885 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:55.715833 kubelet[2837]: W0123 19:29:55.714907 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:55.715833 kubelet[2837]: E0123 19:29:55.714930 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:55.718179 kubelet[2837]: E0123 19:29:55.717492 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:55.718179 kubelet[2837]: W0123 19:29:55.717511 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:55.718179 kubelet[2837]: E0123 19:29:55.717527 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:29:55.719745 kubelet[2837]: E0123 19:29:55.719726 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:55.719908 kubelet[2837]: W0123 19:29:55.719824 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:55.719908 kubelet[2837]: E0123 19:29:55.719846 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:55.722006 kubelet[2837]: E0123 19:29:55.721988 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:29:55.722119 kubelet[2837]: W0123 19:29:55.722074 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:29:55.722119 kubelet[2837]: E0123 19:29:55.722091 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:29:56.041477 containerd[1548]: time="2026-01-23T19:29:56.038673041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:29:56.041477 containerd[1548]: time="2026-01-23T19:29:56.039885841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 23 19:29:56.043654 containerd[1548]: time="2026-01-23T19:29:56.043455953Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:29:56.067547 containerd[1548]: time="2026-01-23T19:29:56.067492904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:29:56.068983 containerd[1548]: time="2026-01-23T19:29:56.068928219Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.994359718s" Jan 23 19:29:56.069421 containerd[1548]: time="2026-01-23T19:29:56.069093587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 19:29:56.115714 containerd[1548]: time="2026-01-23T19:29:56.114218203Z" level=info msg="CreateContainer within sandbox \"2063169b095afda498d3d98fc8070af33fafd372fa9c1a83baafabcaeebb4b68\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 19:29:56.185433 containerd[1548]: time="2026-01-23T19:29:56.181643272Z" level=info msg="Container a05f6b3616a677946605d9d800b2a250ab5d1e7845bc765bae0082454b53ca63: 
CDI devices from CRI Config.CDIDevices: []" Jan 23 19:29:56.253781 containerd[1548]: time="2026-01-23T19:29:56.251440579Z" level=info msg="CreateContainer within sandbox \"2063169b095afda498d3d98fc8070af33fafd372fa9c1a83baafabcaeebb4b68\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a05f6b3616a677946605d9d800b2a250ab5d1e7845bc765bae0082454b53ca63\"" Jan 23 19:29:56.253781 containerd[1548]: time="2026-01-23T19:29:56.253774438Z" level=info msg="StartContainer for \"a05f6b3616a677946605d9d800b2a250ab5d1e7845bc765bae0082454b53ca63\"" Jan 23 19:29:56.260422 containerd[1548]: time="2026-01-23T19:29:56.259942398Z" level=info msg="connecting to shim a05f6b3616a677946605d9d800b2a250ab5d1e7845bc765bae0082454b53ca63" address="unix:///run/containerd/s/709ad41456b962e80eeaac78d46c7801cb64484d81ddb826626e7ff2c7cc018d" protocol=ttrpc version=3 Jan 23 19:29:56.459086 systemd[1]: Started cri-containerd-a05f6b3616a677946605d9d800b2a250ab5d1e7845bc765bae0082454b53ca63.scope - libcontainer container a05f6b3616a677946605d9d800b2a250ab5d1e7845bc765bae0082454b53ca63. Jan 23 19:29:56.666179 containerd[1548]: time="2026-01-23T19:29:56.666012254Z" level=info msg="StartContainer for \"a05f6b3616a677946605d9d800b2a250ab5d1e7845bc765bae0082454b53ca63\" returns successfully" Jan 23 19:29:56.677579 systemd[1]: cri-containerd-a05f6b3616a677946605d9d800b2a250ab5d1e7845bc765bae0082454b53ca63.scope: Deactivated successfully. Jan 23 19:29:56.686161 containerd[1548]: time="2026-01-23T19:29:56.686054519Z" level=info msg="received container exit event container_id:\"a05f6b3616a677946605d9d800b2a250ab5d1e7845bc765bae0082454b53ca63\" id:\"a05f6b3616a677946605d9d800b2a250ab5d1e7845bc765bae0082454b53ca63\" pid:3591 exited_at:{seconds:1769196596 nanos:683915933}" Jan 23 19:29:56.790361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a05f6b3616a677946605d9d800b2a250ab5d1e7845bc765bae0082454b53ca63-rootfs.mount: Deactivated successfully. 
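The FlexVolume storm above has a single cause: kubelet periodically probes every directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, executes each driver binary with the argument `init`, and parses a JSON status object from its stdout. The nodeagent~uds/uds binary (normally installed by Calico's flexvol-driver init container, which is only being created at 19:29:56) does not exist yet, so each call returns empty output and unmarshalling fails with "unexpected end of JSON input". A minimal sketch of that call convention, assuming only the documented FlexVolume interface (illustrative, not Calico's actual driver):

```go
// flexdriver.go — minimal FlexVolume driver skeleton (illustrative sketch).
// kubelet invokes the binary as: <driver> init | mount | unmount ... and
// expects a JSON status object on stdout; an empty stdout is exactly what
// produces the "unexpected end of JSON input" errors in the log above.
package main

import (
	"encoding/json"
	"os"
)

type DriverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	out := json.NewEncoder(os.Stdout)
	if len(os.Args) < 2 {
		out.Encode(DriverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Report attach=false so kubelet skips attach/detach calls and goes
		// straight to mount/unmount on this node.
		out.Encode(DriverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		out.Encode(DriverStatus{Status: "Not supported"})
	}
}
```

Dropped into the plugin directory as, say, nodeagent~uds/uds, a binary satisfying this contract would silence the probe errors; in this log they simply stop once the flexvol-driver container installs the real driver.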
Jan 23 19:29:56.884247 kubelet[2837]: E0123 19:29:56.884099 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:29:57.546495 kubelet[2837]: E0123 19:29:57.544651 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:57.548900 containerd[1548]: time="2026-01-23T19:29:57.548197757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 19:29:58.943251 kubelet[2837]: E0123 19:29:58.938844 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:30:00.895592 kubelet[2837]: E0123 19:30:00.892961 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:30:02.886779 kubelet[2837]: E0123 19:30:02.886715 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:30:04.886476 kubelet[2837]: E0123 19:30:04.885871 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:30:06.263440 containerd[1548]: time="2026-01-23T19:30:06.261166656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:30:06.263440 containerd[1548]: time="2026-01-23T19:30:06.263354649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 23 19:30:06.268829 containerd[1548]: time="2026-01-23T19:30:06.265521772Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:30:06.271394 containerd[1548]: time="2026-01-23T19:30:06.271217792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:30:06.272950 containerd[1548]: time="2026-01-23T19:30:06.272557839Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 8.724314257s" Jan 23 19:30:06.272950 containerd[1548]: time="2026-01-23T19:30:06.272596872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 19:30:06.321331 containerd[1548]: time="2026-01-23T19:30:06.319844519Z" level=info msg="CreateContainer within sandbox \"2063169b095afda498d3d98fc8070af33fafd372fa9c1a83baafabcaeebb4b68\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 19:30:06.378947 containerd[1548]: time="2026-01-23T19:30:06.378835254Z" level=info msg="Container 7315affd1cb307afd09b90aa59074d63680f6eb953f4d59851990859b9f20a31: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:30:06.418361 containerd[1548]: time="2026-01-23T19:30:06.418199711Z" level=info msg="CreateContainer within sandbox \"2063169b095afda498d3d98fc8070af33fafd372fa9c1a83baafabcaeebb4b68\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7315affd1cb307afd09b90aa59074d63680f6eb953f4d59851990859b9f20a31\"" Jan 23 19:30:06.420601 containerd[1548]: time="2026-01-23T19:30:06.420570722Z" level=info msg="StartContainer for \"7315affd1cb307afd09b90aa59074d63680f6eb953f4d59851990859b9f20a31\"" Jan 23 19:30:06.445964 containerd[1548]: time="2026-01-23T19:30:06.444597297Z" level=info msg="connecting to shim 7315affd1cb307afd09b90aa59074d63680f6eb953f4d59851990859b9f20a31" address="unix:///run/containerd/s/709ad41456b962e80eeaac78d46c7801cb64484d81ddb826626e7ff2c7cc018d" protocol=ttrpc version=3 Jan 23 19:30:06.529718 systemd[1]: Started cri-containerd-7315affd1cb307afd09b90aa59074d63680f6eb953f4d59851990859b9f20a31.scope - libcontainer container 7315affd1cb307afd09b90aa59074d63680f6eb953f4d59851990859b9f20a31. Jan 23 19:30:06.753132 containerd[1548]: time="2026-01-23T19:30:06.751617089Z" level=info msg="StartContainer for \"7315affd1cb307afd09b90aa59074d63680f6eb953f4d59851990859b9f20a31\" returns successfully" Jan 23 19:30:06.888678 kubelet[2837]: E0123 19:30:06.888514 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:30:07.657421 kubelet[2837]: E0123 19:30:07.657258 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:08.666411 kubelet[2837]: E0123 19:30:08.666104 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:08.884464 kubelet[2837]: E0123 19:30:08.882489 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:30:09.757448 systemd[1]: cri-containerd-7315affd1cb307afd09b90aa59074d63680f6eb953f4d59851990859b9f20a31.scope: Deactivated successfully. 
Jan 23 19:30:09.757889 systemd[1]: cri-containerd-7315affd1cb307afd09b90aa59074d63680f6eb953f4d59851990859b9f20a31.scope: Consumed 1.623s CPU time, 181.7M memory peak, 3.6M read from disk, 171.3M written to disk. Jan 23 19:30:09.815511 containerd[1548]: time="2026-01-23T19:30:09.812108971Z" level=info msg="received container exit event container_id:\"7315affd1cb307afd09b90aa59074d63680f6eb953f4d59851990859b9f20a31\" id:\"7315affd1cb307afd09b90aa59074d63680f6eb953f4d59851990859b9f20a31\" pid:3651 exited_at:{seconds:1769196609 nanos:764806083}" Jan 23 19:30:09.986416 kubelet[2837]: I0123 19:30:09.986170 2837 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 19:30:10.006790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7315affd1cb307afd09b90aa59074d63680f6eb953f4d59851990859b9f20a31-rootfs.mount: Deactivated successfully. Jan 23 19:30:10.301191 systemd[1]: Created slice kubepods-besteffort-pod87004552_13b2_409e_9fda_f933cdb145c9.slice - libcontainer container kubepods-besteffort-pod87004552_13b2_409e_9fda_f933cdb145c9.slice. Jan 23 19:30:10.309339 kubelet[2837]: I0123 19:30:10.309061 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/87004552-13b2-409e-9fda-f933cdb145c9-calico-apiserver-certs\") pod \"calico-apiserver-545cbc66db-s4jf2\" (UID: \"87004552-13b2-409e-9fda-f933cdb145c9\") " pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" Jan 23 19:30:10.309339 kubelet[2837]: I0123 19:30:10.309110 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljmkl\" (UniqueName: \"kubernetes.io/projected/87004552-13b2-409e-9fda-f933cdb145c9-kube-api-access-ljmkl\") pod \"calico-apiserver-545cbc66db-s4jf2\" (UID: \"87004552-13b2-409e-9fda-f933cdb145c9\") " pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" Jan 23 19:30:10.309339 kubelet[2837]: I0123 19:30:10.309133 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7btv4\" (UniqueName: \"kubernetes.io/projected/9ad2e315-a1c2-4385-9b78-2b5be4403617-kube-api-access-7btv4\") pod \"coredns-674b8bbfcf-c2srk\" (UID: \"9ad2e315-a1c2-4385-9b78-2b5be4403617\") " pod="kube-system/coredns-674b8bbfcf-c2srk" Jan 23 19:30:10.309339 kubelet[2837]: I0123 19:30:10.309156 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae1ba4f6-1230-4757-8b1a-af9cfe7ac401-tigera-ca-bundle\") pod \"calico-kube-controllers-67877fc7f5-bsvtq\" (UID: \"ae1ba4f6-1230-4757-8b1a-af9cfe7ac401\") " pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" Jan 23 19:30:10.309339 kubelet[2837]: I0123 19:30:10.309180 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rdb7\" (UniqueName: \"kubernetes.io/projected/ae1ba4f6-1230-4757-8b1a-af9cfe7ac401-kube-api-access-7rdb7\") pod \"calico-kube-controllers-67877fc7f5-bsvtq\" (UID: \"ae1ba4f6-1230-4757-8b1a-af9cfe7ac401\") " pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" Jan 23 19:30:10.309695 kubelet[2837]: I0123 19:30:10.309203 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ad2e315-a1c2-4385-9b78-2b5be4403617-config-volume\") pod 
\"coredns-674b8bbfcf-c2srk\" (UID: \"9ad2e315-a1c2-4385-9b78-2b5be4403617\") " pod="kube-system/coredns-674b8bbfcf-c2srk" Jan 23 19:30:10.365944 systemd[1]: Created slice kubepods-besteffort-podae1ba4f6_1230_4757_8b1a_af9cfe7ac401.slice - libcontainer container kubepods-besteffort-podae1ba4f6_1230_4757_8b1a_af9cfe7ac401.slice. Jan 23 19:30:10.411956 kubelet[2837]: I0123 19:30:10.410338 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8dbs\" (UniqueName: \"kubernetes.io/projected/ee8a7eda-e868-4533-ab49-9798effa7813-kube-api-access-t8dbs\") pod \"coredns-674b8bbfcf-wnmcb\" (UID: \"ee8a7eda-e868-4533-ab49-9798effa7813\") " pod="kube-system/coredns-674b8bbfcf-wnmcb" Jan 23 19:30:10.415931 kubelet[2837]: I0123 19:30:10.415898 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92099955-c310-4dc6-a23c-2c8c618bc3b8-goldmane-ca-bundle\") pod \"goldmane-666569f655-r9djg\" (UID: \"92099955-c310-4dc6-a23c-2c8c618bc3b8\") " pod="calico-system/goldmane-666569f655-r9djg" Jan 23 19:30:10.416092 kubelet[2837]: I0123 19:30:10.416070 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/92099955-c310-4dc6-a23c-2c8c618bc3b8-goldmane-key-pair\") pod \"goldmane-666569f655-r9djg\" (UID: \"92099955-c310-4dc6-a23c-2c8c618bc3b8\") " pod="calico-system/goldmane-666569f655-r9djg" Jan 23 19:30:10.416195 kubelet[2837]: I0123 19:30:10.416175 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqpf7\" (UniqueName: \"kubernetes.io/projected/87c1e199-aab6-487a-be60-3401d4797307-kube-api-access-mqpf7\") pod \"calico-apiserver-545cbc66db-fpfjb\" (UID: \"87c1e199-aab6-487a-be60-3401d4797307\") " pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" Jan 23 19:30:10.416567 kubelet[2837]: I0123 19:30:10.416544 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee8a7eda-e868-4533-ab49-9798effa7813-config-volume\") pod \"coredns-674b8bbfcf-wnmcb\" (UID: \"ee8a7eda-e868-4533-ab49-9798effa7813\") " pod="kube-system/coredns-674b8bbfcf-wnmcb" Jan 23 19:30:10.443193 systemd[1]: Created slice kubepods-burstable-pod9ad2e315_a1c2_4385_9b78_2b5be4403617.slice - libcontainer container kubepods-burstable-pod9ad2e315_a1c2_4385_9b78_2b5be4403617.slice. 
Jan 23 19:30:10.449779 kubelet[2837]: I0123 19:30:10.449703 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92099955-c310-4dc6-a23c-2c8c618bc3b8-config\") pod \"goldmane-666569f655-r9djg\" (UID: \"92099955-c310-4dc6-a23c-2c8c618bc3b8\") " pod="calico-system/goldmane-666569f655-r9djg" Jan 23 19:30:10.450084 kubelet[2837]: I0123 19:30:10.450057 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdtmz\" (UniqueName: \"kubernetes.io/projected/92099955-c310-4dc6-a23c-2c8c618bc3b8-kube-api-access-fdtmz\") pod \"goldmane-666569f655-r9djg\" (UID: \"92099955-c310-4dc6-a23c-2c8c618bc3b8\") " pod="calico-system/goldmane-666569f655-r9djg" Jan 23 19:30:10.450171 kubelet[2837]: I0123 19:30:10.450156 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/87c1e199-aab6-487a-be60-3401d4797307-calico-apiserver-certs\") pod \"calico-apiserver-545cbc66db-fpfjb\" (UID: \"87c1e199-aab6-487a-be60-3401d4797307\") " pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" Jan 23 19:30:10.450429 kubelet[2837]: I0123 19:30:10.450372 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7e93729c-bf84-4e82-98c5-e8561bff366f-whisker-backend-key-pair\") pod \"whisker-5d74d6c569-nznkz\" (UID: \"7e93729c-bf84-4e82-98c5-e8561bff366f\") " pod="calico-system/whisker-5d74d6c569-nznkz" Jan 23 19:30:10.450554 kubelet[2837]: I0123 19:30:10.450538 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2vqq\" (UniqueName: \"kubernetes.io/projected/7e93729c-bf84-4e82-98c5-e8561bff366f-kube-api-access-p2vqq\") pod \"whisker-5d74d6c569-nznkz\" (UID: \"7e93729c-bf84-4e82-98c5-e8561bff366f\") " pod="calico-system/whisker-5d74d6c569-nznkz" Jan 23 19:30:10.450743 kubelet[2837]: I0123 19:30:10.450727 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e93729c-bf84-4e82-98c5-e8561bff366f-whisker-ca-bundle\") pod \"whisker-5d74d6c569-nznkz\" (UID: \"7e93729c-bf84-4e82-98c5-e8561bff366f\") " pod="calico-system/whisker-5d74d6c569-nznkz" Jan 23 19:30:10.632631 containerd[1548]: time="2026-01-23T19:30:10.632096722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545cbc66db-s4jf2,Uid:87004552-13b2-409e-9fda-f933cdb145c9,Namespace:calico-apiserver,Attempt:0,}" Jan 23 19:30:10.640165 systemd[1]: Created slice kubepods-burstable-podee8a7eda_e868_4533_ab49_9798effa7813.slice - libcontainer container kubepods-burstable-podee8a7eda_e868_4533_ab49_9798effa7813.slice. 
Jan 23 19:30:10.652228 kubelet[2837]: E0123 19:30:10.652144 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:10.657970 containerd[1548]: time="2026-01-23T19:30:10.657925269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wnmcb,Uid:ee8a7eda-e868-4533-ab49-9798effa7813,Namespace:kube-system,Attempt:0,}" Jan 23 19:30:10.661261 systemd[1]: Created slice kubepods-besteffort-pod92099955_c310_4dc6_a23c_2c8c618bc3b8.slice - libcontainer container kubepods-besteffort-pod92099955_c310_4dc6_a23c_2c8c618bc3b8.slice. Jan 23 19:30:10.678807 containerd[1548]: time="2026-01-23T19:30:10.678437505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r9djg,Uid:92099955-c310-4dc6-a23c-2c8c618bc3b8,Namespace:calico-system,Attempt:0,}" Jan 23 19:30:10.699590 systemd[1]: Created slice kubepods-besteffort-pod87c1e199_aab6_487a_be60_3401d4797307.slice - libcontainer container kubepods-besteffort-pod87c1e199_aab6_487a_be60_3401d4797307.slice. Jan 23 19:30:10.727503 containerd[1548]: time="2026-01-23T19:30:10.727449443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545cbc66db-fpfjb,Uid:87c1e199-aab6-487a-be60-3401d4797307,Namespace:calico-apiserver,Attempt:0,}" Jan 23 19:30:10.727943 containerd[1548]: time="2026-01-23T19:30:10.727911885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67877fc7f5-bsvtq,Uid:ae1ba4f6-1230-4757-8b1a-af9cfe7ac401,Namespace:calico-system,Attempt:0,}" Jan 23 19:30:10.742635 systemd[1]: Created slice kubepods-besteffort-pod7e93729c_bf84_4e82_98c5_e8561bff366f.slice - libcontainer container kubepods-besteffort-pod7e93729c_bf84_4e82_98c5_e8561bff366f.slice. Jan 23 19:30:10.755127 containerd[1548]: time="2026-01-23T19:30:10.754677029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d74d6c569-nznkz,Uid:7e93729c-bf84-4e82-98c5-e8561bff366f,Namespace:calico-system,Attempt:0,}" Jan 23 19:30:10.845556 kubelet[2837]: E0123 19:30:10.843480 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:10.898765 containerd[1548]: time="2026-01-23T19:30:10.897695661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c2srk,Uid:9ad2e315-a1c2-4385-9b78-2b5be4403617,Namespace:kube-system,Attempt:0,}" Jan 23 19:30:10.902545 systemd[1]: Created slice kubepods-besteffort-pod7cbe68df_cea7_49bc_bbd7_253343631e45.slice - libcontainer container kubepods-besteffort-pod7cbe68df_cea7_49bc_bbd7_253343631e45.slice. 
Jan 23 19:30:10.903138 kubelet[2837]: E0123 19:30:10.903033 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:10.944824 containerd[1548]: time="2026-01-23T19:30:10.943370159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pspcd,Uid:7cbe68df-cea7-49bc-bbd7-253343631e45,Namespace:calico-system,Attempt:0,}" Jan 23 19:30:11.055876 containerd[1548]: time="2026-01-23T19:30:11.055823535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 19:30:11.244904 containerd[1548]: time="2026-01-23T19:30:11.244848247Z" level=error msg="Failed to destroy network for sandbox \"aaa3970d2b92f681ae18b991e703ac91ad965c0eba78d9d1d98ef890c5c5197f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.249653 systemd[1]: run-netns-cni\x2dbb808e49\x2df8b4\x2d805d\x2d06a5\x2d8d7c813380a4.mount: Deactivated successfully. Jan 23 19:30:11.262438 containerd[1548]: time="2026-01-23T19:30:11.262174463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545cbc66db-s4jf2,Uid:87004552-13b2-409e-9fda-f933cdb145c9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaa3970d2b92f681ae18b991e703ac91ad965c0eba78d9d1d98ef890c5c5197f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.298071 containerd[1548]: time="2026-01-23T19:30:11.298010468Z" level=error msg="Failed to destroy network for sandbox \"77c6d70e3ea1edbfe6933a516bbd9a6654f69b869cdd88e2ab6fbbe0b32df2c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.302106 systemd[1]: run-netns-cni\x2dafb90020\x2d29a1\x2dffc6\x2dea72\x2d70dedfcd33e0.mount: Deactivated successfully. 
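Every sandbox failure that follows reports the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node writes (through its hostpath mount of /var/lib/calico/) once it is running. Until that container is up, every CNI ADD and DEL fails, which is why all seven pending pods error out together. A small diagnostic mirroring the check, assuming only the path quoted in the errors (the helper name is invented for illustration):

```go
// nodenamecheck.go — reproduces the readiness check behind the
// "stat /var/lib/calico/nodename: no such file or directory" errors.
package main

import (
	"fmt"
	"os"
)

func nodenameReady() (string, error) {
	// calico/node writes the node name here after startup; the CNI plugin
	// refuses to set up pod networking until the file exists.
	b, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		return "", fmt.Errorf("calico/node not ready: %w", err)
	}
	return string(b), nil
}

func main() {
	name, err := nodenameReady()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico node:", name)
}
```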
Jan 23 19:30:11.320005 kubelet[2837]: E0123 19:30:11.316122 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaa3970d2b92f681ae18b991e703ac91ad965c0eba78d9d1d98ef890c5c5197f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.320005 kubelet[2837]: E0123 19:30:11.316244 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaa3970d2b92f681ae18b991e703ac91ad965c0eba78d9d1d98ef890c5c5197f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" Jan 23 19:30:11.320005 kubelet[2837]: E0123 19:30:11.316347 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaa3970d2b92f681ae18b991e703ac91ad965c0eba78d9d1d98ef890c5c5197f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" Jan 23 19:30:11.322961 kubelet[2837]: E0123 19:30:11.316940 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-545cbc66db-s4jf2_calico-apiserver(87004552-13b2-409e-9fda-f933cdb145c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-545cbc66db-s4jf2_calico-apiserver(87004552-13b2-409e-9fda-f933cdb145c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aaa3970d2b92f681ae18b991e703ac91ad965c0eba78d9d1d98ef890c5c5197f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:30:11.339827 containerd[1548]: time="2026-01-23T19:30:11.339380278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wnmcb,Uid:ee8a7eda-e868-4533-ab49-9798effa7813,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"77c6d70e3ea1edbfe6933a516bbd9a6654f69b869cdd88e2ab6fbbe0b32df2c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.344174 kubelet[2837]: E0123 19:30:11.344046 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77c6d70e3ea1edbfe6933a516bbd9a6654f69b869cdd88e2ab6fbbe0b32df2c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.344174 kubelet[2837]: E0123 19:30:11.344160 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"77c6d70e3ea1edbfe6933a516bbd9a6654f69b869cdd88e2ab6fbbe0b32df2c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wnmcb" Jan 23 19:30:11.344472 kubelet[2837]: E0123 19:30:11.344192 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77c6d70e3ea1edbfe6933a516bbd9a6654f69b869cdd88e2ab6fbbe0b32df2c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wnmcb" Jan 23 19:30:11.347332 kubelet[2837]: E0123 19:30:11.346961 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wnmcb_kube-system(ee8a7eda-e868-4533-ab49-9798effa7813)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wnmcb_kube-system(ee8a7eda-e868-4533-ab49-9798effa7813)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77c6d70e3ea1edbfe6933a516bbd9a6654f69b869cdd88e2ab6fbbe0b32df2c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wnmcb" podUID="ee8a7eda-e868-4533-ab49-9798effa7813" Jan 23 19:30:11.352637 containerd[1548]: time="2026-01-23T19:30:11.352579358Z" level=error msg="Failed to destroy network for sandbox \"5c707fdecbc5da90bd3f965a1e401cb9190bffdede6f5339902509db2677c7b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.361569 containerd[1548]: time="2026-01-23T19:30:11.361371294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r9djg,Uid:92099955-c310-4dc6-a23c-2c8c618bc3b8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c707fdecbc5da90bd3f965a1e401cb9190bffdede6f5339902509db2677c7b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.362523 kubelet[2837]: E0123 19:30:11.362358 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c707fdecbc5da90bd3f965a1e401cb9190bffdede6f5339902509db2677c7b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.362643 kubelet[2837]: E0123 19:30:11.362519 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c707fdecbc5da90bd3f965a1e401cb9190bffdede6f5339902509db2677c7b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-r9djg" Jan 23 19:30:11.362643 kubelet[2837]: E0123 19:30:11.362591 2837 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c707fdecbc5da90bd3f965a1e401cb9190bffdede6f5339902509db2677c7b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-r9djg" Jan 23 19:30:11.362726 kubelet[2837]: E0123 19:30:11.362668 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-r9djg_calico-system(92099955-c310-4dc6-a23c-2c8c618bc3b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-r9djg_calico-system(92099955-c310-4dc6-a23c-2c8c618bc3b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c707fdecbc5da90bd3f965a1e401cb9190bffdede6f5339902509db2677c7b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:30:11.375693 containerd[1548]: time="2026-01-23T19:30:11.375033654Z" level=error msg="Failed to destroy network for sandbox \"86f4c9fd5dbfc5f685e088ac068b0254021c3363393f02039b050b4fac645571\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.376380 containerd[1548]: time="2026-01-23T19:30:11.375534668Z" level=error msg="Failed to destroy network for sandbox \"eb7dcb1878a8d53cf382b3252551d0648962d0371d649858310fddbf525aa651\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.379521 containerd[1548]: time="2026-01-23T19:30:11.378930106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545cbc66db-fpfjb,Uid:87c1e199-aab6-487a-be60-3401d4797307,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"86f4c9fd5dbfc5f685e088ac068b0254021c3363393f02039b050b4fac645571\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.379669 kubelet[2837]: E0123 19:30:11.379210 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86f4c9fd5dbfc5f685e088ac068b0254021c3363393f02039b050b4fac645571\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.379669 kubelet[2837]: E0123 19:30:11.379639 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86f4c9fd5dbfc5f685e088ac068b0254021c3363393f02039b050b4fac645571\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" 
Jan 23 19:30:11.379761 kubelet[2837]: E0123 19:30:11.379674 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86f4c9fd5dbfc5f685e088ac068b0254021c3363393f02039b050b4fac645571\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" Jan 23 19:30:11.380135 kubelet[2837]: E0123 19:30:11.379900 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-545cbc66db-fpfjb_calico-apiserver(87c1e199-aab6-487a-be60-3401d4797307)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-545cbc66db-fpfjb_calico-apiserver(87c1e199-aab6-487a-be60-3401d4797307)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86f4c9fd5dbfc5f685e088ac068b0254021c3363393f02039b050b4fac645571\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:30:11.382180 containerd[1548]: time="2026-01-23T19:30:11.382073336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d74d6c569-nznkz,Uid:7e93729c-bf84-4e82-98c5-e8561bff366f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb7dcb1878a8d53cf382b3252551d0648962d0371d649858310fddbf525aa651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.382613 kubelet[2837]: E0123 19:30:11.382498 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb7dcb1878a8d53cf382b3252551d0648962d0371d649858310fddbf525aa651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.382613 kubelet[2837]: E0123 19:30:11.382571 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb7dcb1878a8d53cf382b3252551d0648962d0371d649858310fddbf525aa651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d74d6c569-nznkz" Jan 23 19:30:11.382613 kubelet[2837]: E0123 19:30:11.382601 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb7dcb1878a8d53cf382b3252551d0648962d0371d649858310fddbf525aa651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d74d6c569-nznkz" Jan 23 19:30:11.382963 kubelet[2837]: E0123 19:30:11.382651 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"whisker-5d74d6c569-nznkz_calico-system(7e93729c-bf84-4e82-98c5-e8561bff366f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d74d6c569-nznkz_calico-system(7e93729c-bf84-4e82-98c5-e8561bff366f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb7dcb1878a8d53cf382b3252551d0648962d0371d649858310fddbf525aa651\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d74d6c569-nznkz" podUID="7e93729c-bf84-4e82-98c5-e8561bff366f" Jan 23 19:30:11.406081 containerd[1548]: time="2026-01-23T19:30:11.404691787Z" level=error msg="Failed to destroy network for sandbox \"fc75ed3ac6fabfba1024f2639b0ba5590c58bd940977e17aa05a3a78f3a26302\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.428382 containerd[1548]: time="2026-01-23T19:30:11.428237035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c2srk,Uid:9ad2e315-a1c2-4385-9b78-2b5be4403617,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc75ed3ac6fabfba1024f2639b0ba5590c58bd940977e17aa05a3a78f3a26302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.430485 kubelet[2837]: E0123 19:30:11.429161 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc75ed3ac6fabfba1024f2639b0ba5590c58bd940977e17aa05a3a78f3a26302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.430485 kubelet[2837]: E0123 19:30:11.429239 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc75ed3ac6fabfba1024f2639b0ba5590c58bd940977e17aa05a3a78f3a26302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-c2srk" Jan 23 19:30:11.430485 kubelet[2837]: E0123 19:30:11.430198 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc75ed3ac6fabfba1024f2639b0ba5590c58bd940977e17aa05a3a78f3a26302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-c2srk" Jan 23 19:30:11.431080 kubelet[2837]: E0123 19:30:11.430713 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-c2srk_kube-system(9ad2e315-a1c2-4385-9b78-2b5be4403617)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-c2srk_kube-system(9ad2e315-a1c2-4385-9b78-2b5be4403617)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"fc75ed3ac6fabfba1024f2639b0ba5590c58bd940977e17aa05a3a78f3a26302\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-c2srk" podUID="9ad2e315-a1c2-4385-9b78-2b5be4403617" Jan 23 19:30:11.462377 containerd[1548]: time="2026-01-23T19:30:11.462313861Z" level=error msg="Failed to destroy network for sandbox \"60cab38b18cefb78ecbd8c199324ea95f5ddce13fba997609dcf6429f2bc260b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.468733 containerd[1548]: time="2026-01-23T19:30:11.467548643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67877fc7f5-bsvtq,Uid:ae1ba4f6-1230-4757-8b1a-af9cfe7ac401,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60cab38b18cefb78ecbd8c199324ea95f5ddce13fba997609dcf6429f2bc260b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.468981 kubelet[2837]: E0123 19:30:11.468150 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60cab38b18cefb78ecbd8c199324ea95f5ddce13fba997609dcf6429f2bc260b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.468981 kubelet[2837]: E0123 19:30:11.468375 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60cab38b18cefb78ecbd8c199324ea95f5ddce13fba997609dcf6429f2bc260b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" Jan 23 19:30:11.468981 kubelet[2837]: E0123 19:30:11.468505 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60cab38b18cefb78ecbd8c199324ea95f5ddce13fba997609dcf6429f2bc260b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" Jan 23 19:30:11.469108 kubelet[2837]: E0123 19:30:11.468772 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67877fc7f5-bsvtq_calico-system(ae1ba4f6-1230-4757-8b1a-af9cfe7ac401)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67877fc7f5-bsvtq_calico-system(ae1ba4f6-1230-4757-8b1a-af9cfe7ac401)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60cab38b18cefb78ecbd8c199324ea95f5ddce13fba997609dcf6429f2bc260b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:30:11.489145 containerd[1548]: time="2026-01-23T19:30:11.487924978Z" level=error msg="Failed to destroy network for sandbox \"7b2119e0a956d9b4c4f592704d08ad36e0e33a887a1b5c44a2ffa4a2b444ee90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.503842 containerd[1548]: time="2026-01-23T19:30:11.498756634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pspcd,Uid:7cbe68df-cea7-49bc-bbd7-253343631e45,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2119e0a956d9b4c4f592704d08ad36e0e33a887a1b5c44a2ffa4a2b444ee90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.504183 kubelet[2837]: E0123 19:30:11.499356 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2119e0a956d9b4c4f592704d08ad36e0e33a887a1b5c44a2ffa4a2b444ee90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:11.504183 kubelet[2837]: E0123 19:30:11.499508 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2119e0a956d9b4c4f592704d08ad36e0e33a887a1b5c44a2ffa4a2b444ee90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pspcd" Jan 23 19:30:11.504183 kubelet[2837]: E0123 19:30:11.499535 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2119e0a956d9b4c4f592704d08ad36e0e33a887a1b5c44a2ffa4a2b444ee90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pspcd" Jan 23 19:30:11.504802 kubelet[2837]: E0123 19:30:11.499622 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pspcd_calico-system(7cbe68df-cea7-49bc-bbd7-253343631e45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pspcd_calico-system(7cbe68df-cea7-49bc-bbd7-253343631e45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b2119e0a956d9b4c4f592704d08ad36e0e33a887a1b5c44a2ffa4a2b444ee90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:30:12.024623 systemd[1]: run-netns-cni\x2d50aa78dd\x2d0cfb\x2d1d5e\x2d7e07\x2d24f07b9b1c7b.mount: Deactivated successfully. 
Jan 23 19:30:12.024797 systemd[1]: run-netns-cni\x2d379df9eb\x2d56a3\x2d0353\x2d5086\x2d74b752de910f.mount: Deactivated successfully. Jan 23 19:30:12.024880 systemd[1]: run-netns-cni\x2db0dc7439\x2d84eb\x2de7d3\x2dc1ea\x2de730de2db285.mount: Deactivated successfully. Jan 23 19:30:12.024958 systemd[1]: run-netns-cni\x2d8c06279e\x2deabd\x2dc6a5\x2de8ff\x2dbc2e2dff65f6.mount: Deactivated successfully. Jan 23 19:30:12.025035 systemd[1]: run-netns-cni\x2da968a11b\x2d1fb3\x2d33b4\x2de47c\x2d31913ff62d8f.mount: Deactivated successfully. Jan 23 19:30:12.025111 systemd[1]: run-netns-cni\x2dfa4d11f7\x2d5a0b\x2d9764\x2db7a6\x2d4fd0d3728342.mount: Deactivated successfully. Jan 23 19:30:15.890939 kubelet[2837]: E0123 19:30:15.888640 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:16.884570 kubelet[2837]: E0123 19:30:16.884516 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:21.915442 kubelet[2837]: E0123 19:30:21.914881 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:21.919911 containerd[1548]: time="2026-01-23T19:30:21.915978700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67877fc7f5-bsvtq,Uid:ae1ba4f6-1230-4757-8b1a-af9cfe7ac401,Namespace:calico-system,Attempt:0,}" Jan 23 19:30:21.919911 containerd[1548]: time="2026-01-23T19:30:21.916699184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wnmcb,Uid:ee8a7eda-e868-4533-ab49-9798effa7813,Namespace:kube-system,Attempt:0,}" Jan 23 19:30:21.919911 containerd[1548]: time="2026-01-23T19:30:21.916992339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d74d6c569-nznkz,Uid:7e93729c-bf84-4e82-98c5-e8561bff366f,Namespace:calico-system,Attempt:0,}" Jan 23 19:30:22.289865 containerd[1548]: time="2026-01-23T19:30:22.289812328Z" level=error msg="Failed to destroy network for sandbox \"647e5ded2fc47b6c2bd6d8feb5b2451ebaaadfd400d03ef3191b50a58623d38b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:22.292030 containerd[1548]: time="2026-01-23T19:30:22.291538455Z" level=error msg="Failed to destroy network for sandbox \"91ce46b4fb6144e421633cc304ad73c7a7a0461251b022125b47c8a2937b09cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:22.298369 systemd[1]: run-netns-cni\x2d3d2ebe60\x2db7a9\x2d4800\x2d51f9\x2d6e81a7009c67.mount: Deactivated successfully. Jan 23 19:30:22.298590 systemd[1]: run-netns-cni\x2de1101c6a\x2dd284\x2d7b04\x2d4400\x2d263e862506bb.mount: Deactivated successfully. 
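
The "Nameserver limits exceeded" entries record kubelet capping the resolver list at three nameservers (the classic glibc resolv.conf limit), which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8" with anything beyond the third omitted. A small sketch of that truncation, under the assumption of a glibc-style resolv.conf; the sample input and its fourth resolver are hypothetical.

#!/usr/bin/env python3
# Sketch of the cap behind kubelet's "Nameserver limits exceeded" warning.
MAX_NAMESERVERS = 3  # resolv.conf limit kubelet enforces

def applied_nameservers(resolv_conf_text: str) -> tuple[list[str], list[str]]:
    # Collect "nameserver <addr>" lines, keep the first three, report the rest.
    servers = [
        line.split()[1]
        for line in resolv_conf_text.splitlines()
        if line.startswith("nameserver") and len(line.split()) >= 2
    ]
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

if __name__ == "__main__":
    sample = ("nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
              "nameserver 8.8.8.8\nnameserver 9.9.9.9\n")  # hypothetical 4th entry
    kept, dropped = applied_nameservers(sample)
    print("applied:", " ".join(kept))   # -> 1.1.1.1 1.0.0.1 8.8.8.8, as in the log
    print("omitted:", " ".join(dropped))
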
Jan 23 19:30:22.305393 containerd[1548]: time="2026-01-23T19:30:22.305137902Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d74d6c569-nznkz,Uid:7e93729c-bf84-4e82-98c5-e8561bff366f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"647e5ded2fc47b6c2bd6d8feb5b2451ebaaadfd400d03ef3191b50a58623d38b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:22.308316 kubelet[2837]: E0123 19:30:22.307711 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"647e5ded2fc47b6c2bd6d8feb5b2451ebaaadfd400d03ef3191b50a58623d38b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:22.308316 kubelet[2837]: E0123 19:30:22.307789 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"647e5ded2fc47b6c2bd6d8feb5b2451ebaaadfd400d03ef3191b50a58623d38b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d74d6c569-nznkz" Jan 23 19:30:22.308316 kubelet[2837]: E0123 19:30:22.307946 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"647e5ded2fc47b6c2bd6d8feb5b2451ebaaadfd400d03ef3191b50a58623d38b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d74d6c569-nznkz" Jan 23 19:30:22.308559 kubelet[2837]: E0123 19:30:22.308014 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d74d6c569-nznkz_calico-system(7e93729c-bf84-4e82-98c5-e8561bff366f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d74d6c569-nznkz_calico-system(7e93729c-bf84-4e82-98c5-e8561bff366f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"647e5ded2fc47b6c2bd6d8feb5b2451ebaaadfd400d03ef3191b50a58623d38b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d74d6c569-nznkz" podUID="7e93729c-bf84-4e82-98c5-e8561bff366f" Jan 23 19:30:22.322634 containerd[1548]: time="2026-01-23T19:30:22.322538546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wnmcb,Uid:ee8a7eda-e868-4533-ab49-9798effa7813,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ce46b4fb6144e421633cc304ad73c7a7a0461251b022125b47c8a2937b09cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:22.323659 kubelet[2837]: E0123 19:30:22.323421 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"91ce46b4fb6144e421633cc304ad73c7a7a0461251b022125b47c8a2937b09cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:22.323659 kubelet[2837]: E0123 19:30:22.323568 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ce46b4fb6144e421633cc304ad73c7a7a0461251b022125b47c8a2937b09cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wnmcb" Jan 23 19:30:22.323659 kubelet[2837]: E0123 19:30:22.323609 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ce46b4fb6144e421633cc304ad73c7a7a0461251b022125b47c8a2937b09cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wnmcb" Jan 23 19:30:22.323843 kubelet[2837]: E0123 19:30:22.323679 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wnmcb_kube-system(ee8a7eda-e868-4533-ab49-9798effa7813)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wnmcb_kube-system(ee8a7eda-e868-4533-ab49-9798effa7813)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91ce46b4fb6144e421633cc304ad73c7a7a0461251b022125b47c8a2937b09cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wnmcb" podUID="ee8a7eda-e868-4533-ab49-9798effa7813" Jan 23 19:30:22.362689 containerd[1548]: time="2026-01-23T19:30:22.362531239Z" level=error msg="Failed to destroy network for sandbox \"96fa9a242d9a36ea97e45e714a7e2178497a3bcfeea0b234357d367d89b61430\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:22.372074 containerd[1548]: time="2026-01-23T19:30:22.372005867Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67877fc7f5-bsvtq,Uid:ae1ba4f6-1230-4757-8b1a-af9cfe7ac401,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"96fa9a242d9a36ea97e45e714a7e2178497a3bcfeea0b234357d367d89b61430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:22.374133 kubelet[2837]: E0123 19:30:22.373988 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96fa9a242d9a36ea97e45e714a7e2178497a3bcfeea0b234357d367d89b61430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:22.374133 kubelet[2837]: E0123 19:30:22.374063 2837 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96fa9a242d9a36ea97e45e714a7e2178497a3bcfeea0b234357d367d89b61430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" Jan 23 19:30:22.374133 kubelet[2837]: E0123 19:30:22.374092 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96fa9a242d9a36ea97e45e714a7e2178497a3bcfeea0b234357d367d89b61430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" Jan 23 19:30:22.375389 kubelet[2837]: E0123 19:30:22.375331 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67877fc7f5-bsvtq_calico-system(ae1ba4f6-1230-4757-8b1a-af9cfe7ac401)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67877fc7f5-bsvtq_calico-system(ae1ba4f6-1230-4757-8b1a-af9cfe7ac401)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96fa9a242d9a36ea97e45e714a7e2178497a3bcfeea0b234357d367d89b61430\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:30:22.886070 containerd[1548]: time="2026-01-23T19:30:22.885952350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545cbc66db-fpfjb,Uid:87c1e199-aab6-487a-be60-3401d4797307,Namespace:calico-apiserver,Attempt:0,}" Jan 23 19:30:23.023209 systemd[1]: run-netns-cni\x2d58adceef\x2d8d9a\x2dde9e\x2dbf9f\x2d8068f2bf5f6a.mount: Deactivated successfully. Jan 23 19:30:23.264461 containerd[1548]: time="2026-01-23T19:30:23.257843052Z" level=error msg="Failed to destroy network for sandbox \"60ac10dd497dffb98d04275e16e5abc8430a4e5c69461b4faf79a9f3a74fa72a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:23.276245 systemd[1]: run-netns-cni\x2db7a4f8b7\x2d6192\x2db5cd\x2d959e\x2d95be46b87aee.mount: Deactivated successfully. 
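
The run-netns units being deactivated above are mount units for the CNI network namespaces under /run/netns; systemd escapes literal dashes in unit names as \x2d and maps path separators to dashes. A sketch decoding one such unit name back to its namespace path; the helper name is illustrative, the .mount suffix is assumed stripped before decoding, and leading-character corner cases of the real systemd escaping are ignored.

#!/usr/bin/env python3
# Sketch reversing systemd's unit-name escaping for the run-netns mounts above.

def systemd_unescape(unit: str) -> str:
    # "\x2d" decodes to "-"; a plain "-" in a unit name stands for "/".
    out, i = [], 0
    while i < len(unit):
        if unit[i] == "\\" and unit[i + 1 : i + 2] == "x":
            out.append(chr(int(unit[i + 2 : i + 4], 16)))
            i += 4
        else:
            out.append("/" if unit[i] == "-" else unit[i])
            i += 1
    return "/" + "".join(out)

if __name__ == "__main__":
    print(systemd_unescape(r"run-netns-cni\x2db7a4f8b7\x2d6192\x2db5cd\x2d959e\x2d95be46b87aee"))
    # -> /run/netns/cni-b7a4f8b7-6192-b5cd-959e-95be46b87aee
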
Jan 23 19:30:23.313555 containerd[1548]: time="2026-01-23T19:30:23.313371318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545cbc66db-fpfjb,Uid:87c1e199-aab6-487a-be60-3401d4797307,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60ac10dd497dffb98d04275e16e5abc8430a4e5c69461b4faf79a9f3a74fa72a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:23.316574 kubelet[2837]: E0123 19:30:23.314516 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60ac10dd497dffb98d04275e16e5abc8430a4e5c69461b4faf79a9f3a74fa72a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:23.316574 kubelet[2837]: E0123 19:30:23.315215 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60ac10dd497dffb98d04275e16e5abc8430a4e5c69461b4faf79a9f3a74fa72a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" Jan 23 19:30:23.316574 kubelet[2837]: E0123 19:30:23.315257 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60ac10dd497dffb98d04275e16e5abc8430a4e5c69461b4faf79a9f3a74fa72a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" Jan 23 19:30:23.317256 kubelet[2837]: E0123 19:30:23.315392 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-545cbc66db-fpfjb_calico-apiserver(87c1e199-aab6-487a-be60-3401d4797307)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-545cbc66db-fpfjb_calico-apiserver(87c1e199-aab6-487a-be60-3401d4797307)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60ac10dd497dffb98d04275e16e5abc8430a4e5c69461b4faf79a9f3a74fa72a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:30:23.905010 kubelet[2837]: E0123 19:30:23.899843 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:23.905440 containerd[1548]: time="2026-01-23T19:30:23.902087334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pspcd,Uid:7cbe68df-cea7-49bc-bbd7-253343631e45,Namespace:calico-system,Attempt:0,}" Jan 23 19:30:23.905440 containerd[1548]: time="2026-01-23T19:30:23.903372019Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-c2srk,Uid:9ad2e315-a1c2-4385-9b78-2b5be4403617,Namespace:kube-system,Attempt:0,}" Jan 23 19:30:24.258117 containerd[1548]: time="2026-01-23T19:30:24.258056484Z" level=error msg="Failed to destroy network for sandbox \"eb1dd04abd6a61ceb909acaab0022095b680f64dc8e9ca59ee5c634e0dfb47a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:24.272017 containerd[1548]: time="2026-01-23T19:30:24.271956551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c2srk,Uid:9ad2e315-a1c2-4385-9b78-2b5be4403617,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1dd04abd6a61ceb909acaab0022095b680f64dc8e9ca59ee5c634e0dfb47a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:24.273796 systemd[1]: run-netns-cni\x2d35dfe5c3\x2d2597\x2dde73\x2da543\x2d9863d6b1ee62.mount: Deactivated successfully. Jan 23 19:30:24.277083 kubelet[2837]: E0123 19:30:24.274663 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1dd04abd6a61ceb909acaab0022095b680f64dc8e9ca59ee5c634e0dfb47a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:24.277083 kubelet[2837]: E0123 19:30:24.274758 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1dd04abd6a61ceb909acaab0022095b680f64dc8e9ca59ee5c634e0dfb47a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-c2srk" Jan 23 19:30:24.277083 kubelet[2837]: E0123 19:30:24.274792 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1dd04abd6a61ceb909acaab0022095b680f64dc8e9ca59ee5c634e0dfb47a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-c2srk" Jan 23 19:30:24.277335 kubelet[2837]: E0123 19:30:24.274940 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-c2srk_kube-system(9ad2e315-a1c2-4385-9b78-2b5be4403617)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-c2srk_kube-system(9ad2e315-a1c2-4385-9b78-2b5be4403617)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb1dd04abd6a61ceb909acaab0022095b680f64dc8e9ca59ee5c634e0dfb47a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-c2srk" podUID="9ad2e315-a1c2-4385-9b78-2b5be4403617" Jan 23 19:30:24.308474 containerd[1548]: time="2026-01-23T19:30:24.304947675Z" level=error msg="Failed to 
destroy network for sandbox \"f7e8940fd84fd7d3da660d477d00ec45642bcfb7e287098cb3e7826fc6fa786a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:24.325944 systemd[1]: run-netns-cni\x2d42ccbd28\x2d5518\x2dc5c2\x2dbe2b\x2d30ea319ac06a.mount: Deactivated successfully. Jan 23 19:30:24.338418 containerd[1548]: time="2026-01-23T19:30:24.338164354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pspcd,Uid:7cbe68df-cea7-49bc-bbd7-253343631e45,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7e8940fd84fd7d3da660d477d00ec45642bcfb7e287098cb3e7826fc6fa786a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:24.339920 kubelet[2837]: E0123 19:30:24.339857 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7e8940fd84fd7d3da660d477d00ec45642bcfb7e287098cb3e7826fc6fa786a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:24.341765 kubelet[2837]: E0123 19:30:24.339931 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7e8940fd84fd7d3da660d477d00ec45642bcfb7e287098cb3e7826fc6fa786a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pspcd" Jan 23 19:30:24.341765 kubelet[2837]: E0123 19:30:24.339959 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7e8940fd84fd7d3da660d477d00ec45642bcfb7e287098cb3e7826fc6fa786a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pspcd" Jan 23 19:30:24.341765 kubelet[2837]: E0123 19:30:24.340060 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pspcd_calico-system(7cbe68df-cea7-49bc-bbd7-253343631e45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pspcd_calico-system(7cbe68df-cea7-49bc-bbd7-253343631e45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7e8940fd84fd7d3da660d477d00ec45642bcfb7e287098cb3e7826fc6fa786a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:30:26.885613 kubelet[2837]: E0123 19:30:26.885015 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:26.889055 containerd[1548]: time="2026-01-23T19:30:26.888962242Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r9djg,Uid:92099955-c310-4dc6-a23c-2c8c618bc3b8,Namespace:calico-system,Attempt:0,}" Jan 23 19:30:26.889900 containerd[1548]: time="2026-01-23T19:30:26.889405859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545cbc66db-s4jf2,Uid:87004552-13b2-409e-9fda-f933cdb145c9,Namespace:calico-apiserver,Attempt:0,}" Jan 23 19:30:27.129070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1037642871.mount: Deactivated successfully. Jan 23 19:30:27.155948 containerd[1548]: time="2026-01-23T19:30:27.154603441Z" level=error msg="Failed to destroy network for sandbox \"fe468fe4e02ffecb1961b0e28081e609d21cf6dde7be47ae5b2bb3162abd6e4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:27.165206 systemd[1]: run-netns-cni\x2d77a25042\x2d3654\x2d372e\x2d5c2a\x2d74c00e920f61.mount: Deactivated successfully. Jan 23 19:30:27.195398 containerd[1548]: time="2026-01-23T19:30:27.194642707Z" level=error msg="Failed to destroy network for sandbox \"53ddcd101aca7ae4bb87565600af2d0430ccef597bfa5084777b9824f736f163\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:27.199680 systemd[1]: run-netns-cni\x2dfbb211ca\x2dd8c5\x2da6cc\x2dd9e9\x2dadd4a9d1d2b6.mount: Deactivated successfully. Jan 23 19:30:27.213617 containerd[1548]: time="2026-01-23T19:30:27.213202134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545cbc66db-s4jf2,Uid:87004552-13b2-409e-9fda-f933cdb145c9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe468fe4e02ffecb1961b0e28081e609d21cf6dde7be47ae5b2bb3162abd6e4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:27.214336 kubelet[2837]: E0123 19:30:27.214226 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe468fe4e02ffecb1961b0e28081e609d21cf6dde7be47ae5b2bb3162abd6e4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:27.214436 kubelet[2837]: E0123 19:30:27.214379 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe468fe4e02ffecb1961b0e28081e609d21cf6dde7be47ae5b2bb3162abd6e4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" Jan 23 19:30:27.214489 kubelet[2837]: E0123 19:30:27.214417 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe468fe4e02ffecb1961b0e28081e609d21cf6dde7be47ae5b2bb3162abd6e4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" Jan 23 19:30:27.214643 kubelet[2837]: E0123 19:30:27.214572 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-545cbc66db-s4jf2_calico-apiserver(87004552-13b2-409e-9fda-f933cdb145c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-545cbc66db-s4jf2_calico-apiserver(87004552-13b2-409e-9fda-f933cdb145c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe468fe4e02ffecb1961b0e28081e609d21cf6dde7be47ae5b2bb3162abd6e4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:30:27.221118 containerd[1548]: time="2026-01-23T19:30:27.220933771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r9djg,Uid:92099955-c310-4dc6-a23c-2c8c618bc3b8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ddcd101aca7ae4bb87565600af2d0430ccef597bfa5084777b9824f736f163\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:27.221954 kubelet[2837]: E0123 19:30:27.221737 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ddcd101aca7ae4bb87565600af2d0430ccef597bfa5084777b9824f736f163\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:30:27.221954 kubelet[2837]: E0123 19:30:27.221815 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ddcd101aca7ae4bb87565600af2d0430ccef597bfa5084777b9824f736f163\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-r9djg" Jan 23 19:30:27.221954 kubelet[2837]: E0123 19:30:27.221847 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ddcd101aca7ae4bb87565600af2d0430ccef597bfa5084777b9824f736f163\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-r9djg" Jan 23 19:30:27.222061 kubelet[2837]: E0123 19:30:27.221898 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-r9djg_calico-system(92099955-c310-4dc6-a23c-2c8c618bc3b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-r9djg_calico-system(92099955-c310-4dc6-a23c-2c8c618bc3b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53ddcd101aca7ae4bb87565600af2d0430ccef597bfa5084777b9824f736f163\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:30:27.246235 containerd[1548]: time="2026-01-23T19:30:27.245851945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:30:27.248550 containerd[1548]: time="2026-01-23T19:30:27.248474514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 19:30:27.251186 containerd[1548]: time="2026-01-23T19:30:27.251132647Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:30:27.263842 containerd[1548]: time="2026-01-23T19:30:27.263411132Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 16.207531752s" Jan 23 19:30:27.263842 containerd[1548]: time="2026-01-23T19:30:27.263467086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 19:30:27.264133 containerd[1548]: time="2026-01-23T19:30:27.263596749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:30:27.367220 containerd[1548]: time="2026-01-23T19:30:27.366047733Z" level=info msg="CreateContainer within sandbox \"2063169b095afda498d3d98fc8070af33fafd372fa9c1a83baafabcaeebb4b68\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 19:30:27.402713 containerd[1548]: time="2026-01-23T19:30:27.400436697Z" level=info msg="Container 8c57ffe58c8429e0b04cddec0bbc33c9a82178f02c9351d9c9ec00a1743628aa: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:30:27.451372 containerd[1548]: time="2026-01-23T19:30:27.451059429Z" level=info msg="CreateContainer within sandbox \"2063169b095afda498d3d98fc8070af33fafd372fa9c1a83baafabcaeebb4b68\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8c57ffe58c8429e0b04cddec0bbc33c9a82178f02c9351d9c9ec00a1743628aa\"" Jan 23 19:30:27.459920 containerd[1548]: time="2026-01-23T19:30:27.459844102Z" level=info msg="StartContainer for \"8c57ffe58c8429e0b04cddec0bbc33c9a82178f02c9351d9c9ec00a1743628aa\"" Jan 23 19:30:27.484811 containerd[1548]: time="2026-01-23T19:30:27.484697014Z" level=info msg="connecting to shim 8c57ffe58c8429e0b04cddec0bbc33c9a82178f02c9351d9c9ec00a1743628aa" address="unix:///run/containerd/s/709ad41456b962e80eeaac78d46c7801cb64484d81ddb826626e7ff2c7cc018d" protocol=ttrpc version=3 Jan 23 19:30:27.539387 systemd[1]: Started cri-containerd-8c57ffe58c8429e0b04cddec0bbc33c9a82178f02c9351d9c9ec00a1743628aa.scope - libcontainer container 8c57ffe58c8429e0b04cddec0bbc33c9a82178f02c9351d9c9ec00a1743628aa. 
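
The pull that unblocks calico-node completes in the entries above: containerd reports the bytes read and the elapsed time for ghcr.io/flatcar/calico/node:v3.30.4, from which the effective transfer rate follows. Worked out below with the figures copied from the log; the variable names are illustrative.

#!/usr/bin/env python3
# Arithmetic from the pull entries above.
bytes_read = 156_883_675   # "bytes read=156883675" in the log
seconds = 16.207531752     # "in 16.207531752s" in the log

mib = bytes_read / (1024 ** 2)
print(f"image size : {mib:.1f} MiB")                 # ~149.6 MiB
print(f"throughput : {mib / seconds:.1f} MiB/s")     # ~9.2 MiB/s from this registry
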
Jan 23 19:30:27.753350 containerd[1548]: time="2026-01-23T19:30:27.752128742Z" level=info msg="StartContainer for \"8c57ffe58c8429e0b04cddec0bbc33c9a82178f02c9351d9c9ec00a1743628aa\" returns successfully" Jan 23 19:30:28.018742 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 19:30:28.021581 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 23 19:30:28.287400 kubelet[2837]: E0123 19:30:28.286889 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:28.368240 kubelet[2837]: I0123 19:30:28.368101 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-68ckt" podStartSLOduration=2.525305219 podStartE2EDuration="42.368086715s" podCreationTimestamp="2026-01-23 19:29:46 +0000 UTC" firstStartedPulling="2026-01-23 19:29:47.446783482 +0000 UTC m=+44.124323325" lastFinishedPulling="2026-01-23 19:30:27.289564988 +0000 UTC m=+83.967104821" observedRunningTime="2026-01-23 19:30:28.362136375 +0000 UTC m=+85.039676228" watchObservedRunningTime="2026-01-23 19:30:28.368086715 +0000 UTC m=+85.045626548" Jan 23 19:30:28.545497 kubelet[2837]: I0123 19:30:28.542951 2837 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e93729c-bf84-4e82-98c5-e8561bff366f-whisker-ca-bundle\") pod \"7e93729c-bf84-4e82-98c5-e8561bff366f\" (UID: \"7e93729c-bf84-4e82-98c5-e8561bff366f\") " Jan 23 19:30:28.545497 kubelet[2837]: I0123 19:30:28.543023 2837 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7e93729c-bf84-4e82-98c5-e8561bff366f-whisker-backend-key-pair\") pod \"7e93729c-bf84-4e82-98c5-e8561bff366f\" (UID: \"7e93729c-bf84-4e82-98c5-e8561bff366f\") " Jan 23 19:30:28.545497 kubelet[2837]: I0123 19:30:28.543067 2837 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2vqq\" (UniqueName: \"kubernetes.io/projected/7e93729c-bf84-4e82-98c5-e8561bff366f-kube-api-access-p2vqq\") pod \"7e93729c-bf84-4e82-98c5-e8561bff366f\" (UID: \"7e93729c-bf84-4e82-98c5-e8561bff366f\") " Jan 23 19:30:28.545497 kubelet[2837]: I0123 19:30:28.544180 2837 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e93729c-bf84-4e82-98c5-e8561bff366f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7e93729c-bf84-4e82-98c5-e8561bff366f" (UID: "7e93729c-bf84-4e82-98c5-e8561bff366f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 19:30:28.554085 systemd[1]: var-lib-kubelet-pods-7e93729c\x2dbf84\x2d4e82\x2d98c5\x2de8561bff366f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp2vqq.mount: Deactivated successfully. Jan 23 19:30:28.556658 kubelet[2837]: I0123 19:30:28.556580 2837 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e93729c-bf84-4e82-98c5-e8561bff366f-kube-api-access-p2vqq" (OuterVolumeSpecName: "kube-api-access-p2vqq") pod "7e93729c-bf84-4e82-98c5-e8561bff366f" (UID: "7e93729c-bf84-4e82-98c5-e8561bff366f"). InnerVolumeSpecName "kube-api-access-p2vqq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 19:30:28.565046 kubelet[2837]: I0123 19:30:28.564971 2837 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e93729c-bf84-4e82-98c5-e8561bff366f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7e93729c-bf84-4e82-98c5-e8561bff366f" (UID: "7e93729c-bf84-4e82-98c5-e8561bff366f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 19:30:28.566808 systemd[1]: var-lib-kubelet-pods-7e93729c\x2dbf84\x2d4e82\x2d98c5\x2de8561bff366f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 19:30:28.644334 kubelet[2837]: I0123 19:30:28.644179 2837 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e93729c-bf84-4e82-98c5-e8561bff366f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 23 19:30:28.644334 kubelet[2837]: I0123 19:30:28.644230 2837 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7e93729c-bf84-4e82-98c5-e8561bff366f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 23 19:30:28.644334 kubelet[2837]: I0123 19:30:28.644248 2837 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p2vqq\" (UniqueName: \"kubernetes.io/projected/7e93729c-bf84-4e82-98c5-e8561bff366f-kube-api-access-p2vqq\") on node \"localhost\" DevicePath \"\"" Jan 23 19:30:29.314384 kubelet[2837]: E0123 19:30:29.311603 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:29.352659 systemd[1]: Removed slice kubepods-besteffort-pod7e93729c_bf84_4e82_98c5_e8561bff366f.slice - libcontainer container kubepods-besteffort-pod7e93729c_bf84_4e82_98c5_e8561bff366f.slice. Jan 23 19:30:29.838069 systemd[1]: Created slice kubepods-besteffort-pod93f1f488_26e4_4256_ae3b_355d056de5e6.slice - libcontainer container kubepods-besteffort-pod93f1f488_26e4_4256_ae3b_355d056de5e6.slice. 
Jan 23 19:30:29.889374 kubelet[2837]: I0123 19:30:29.889112 2837 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e93729c-bf84-4e82-98c5-e8561bff366f" path="/var/lib/kubelet/pods/7e93729c-bf84-4e82-98c5-e8561bff366f/volumes" Jan 23 19:30:29.974668 kubelet[2837]: I0123 19:30:29.973196 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/93f1f488-26e4-4256-ae3b-355d056de5e6-whisker-backend-key-pair\") pod \"whisker-77958bf869-kjvtc\" (UID: \"93f1f488-26e4-4256-ae3b-355d056de5e6\") " pod="calico-system/whisker-77958bf869-kjvtc" Jan 23 19:30:29.974668 kubelet[2837]: I0123 19:30:29.973328 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93f1f488-26e4-4256-ae3b-355d056de5e6-whisker-ca-bundle\") pod \"whisker-77958bf869-kjvtc\" (UID: \"93f1f488-26e4-4256-ae3b-355d056de5e6\") " pod="calico-system/whisker-77958bf869-kjvtc" Jan 23 19:30:29.974668 kubelet[2837]: I0123 19:30:29.973391 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v55kf\" (UniqueName: \"kubernetes.io/projected/93f1f488-26e4-4256-ae3b-355d056de5e6-kube-api-access-v55kf\") pod \"whisker-77958bf869-kjvtc\" (UID: \"93f1f488-26e4-4256-ae3b-355d056de5e6\") " pod="calico-system/whisker-77958bf869-kjvtc" Jan 23 19:30:30.146637 containerd[1548]: time="2026-01-23T19:30:30.145800215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77958bf869-kjvtc,Uid:93f1f488-26e4-4256-ae3b-355d056de5e6,Namespace:calico-system,Attempt:0,}" Jan 23 19:30:30.902081 systemd-networkd[1457]: cali8e7e5538aab: Link UP Jan 23 19:30:30.905135 systemd-networkd[1457]: cali8e7e5538aab: Gained carrier Jan 23 19:30:30.983903 containerd[1548]: 2026-01-23 19:30:30.321 [INFO][4430] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 19:30:30.983903 containerd[1548]: 2026-01-23 19:30:30.396 [INFO][4430] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--77958bf869--kjvtc-eth0 whisker-77958bf869- calico-system 93f1f488-26e4-4256-ae3b-355d056de5e6 1102 0 2026-01-23 19:30:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77958bf869 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-77958bf869-kjvtc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8e7e5538aab [] [] }} ContainerID="c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" Namespace="calico-system" Pod="whisker-77958bf869-kjvtc" WorkloadEndpoint="localhost-k8s-whisker--77958bf869--kjvtc-" Jan 23 19:30:30.983903 containerd[1548]: 2026-01-23 19:30:30.396 [INFO][4430] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" Namespace="calico-system" Pod="whisker-77958bf869-kjvtc" WorkloadEndpoint="localhost-k8s-whisker--77958bf869--kjvtc-eth0" Jan 23 19:30:30.983903 containerd[1548]: 2026-01-23 19:30:30.710 [INFO][4455] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" 
HandleID="k8s-pod-network.c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" Workload="localhost-k8s-whisker--77958bf869--kjvtc-eth0" Jan 23 19:30:30.984324 containerd[1548]: 2026-01-23 19:30:30.713 [INFO][4455] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" HandleID="k8s-pod-network.c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" Workload="localhost-k8s-whisker--77958bf869--kjvtc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000140280), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-77958bf869-kjvtc", "timestamp":"2026-01-23 19:30:30.710964442 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:30:30.984324 containerd[1548]: 2026-01-23 19:30:30.713 [INFO][4455] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:30:30.984324 containerd[1548]: 2026-01-23 19:30:30.714 [INFO][4455] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 19:30:30.984324 containerd[1548]: 2026-01-23 19:30:30.714 [INFO][4455] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 19:30:30.984324 containerd[1548]: 2026-01-23 19:30:30.742 [INFO][4455] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" host="localhost" Jan 23 19:30:30.984324 containerd[1548]: 2026-01-23 19:30:30.767 [INFO][4455] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 19:30:30.984324 containerd[1548]: 2026-01-23 19:30:30.787 [INFO][4455] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 19:30:30.984324 containerd[1548]: 2026-01-23 19:30:30.795 [INFO][4455] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 19:30:30.984324 containerd[1548]: 2026-01-23 19:30:30.801 [INFO][4455] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 19:30:30.984324 containerd[1548]: 2026-01-23 19:30:30.801 [INFO][4455] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" host="localhost" Jan 23 19:30:30.984806 containerd[1548]: 2026-01-23 19:30:30.806 [INFO][4455] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b Jan 23 19:30:30.984806 containerd[1548]: 2026-01-23 19:30:30.833 [INFO][4455] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" host="localhost" Jan 23 19:30:30.984806 containerd[1548]: 2026-01-23 19:30:30.855 [INFO][4455] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" host="localhost" Jan 23 19:30:30.984806 containerd[1548]: 2026-01-23 19:30:30.855 [INFO][4455] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" host="localhost" Jan 23 
19:30:30.984806 containerd[1548]: 2026-01-23 19:30:30.855 [INFO][4455] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 19:30:30.984806 containerd[1548]: 2026-01-23 19:30:30.855 [INFO][4455] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" HandleID="k8s-pod-network.c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" Workload="localhost-k8s-whisker--77958bf869--kjvtc-eth0" Jan 23 19:30:30.984987 containerd[1548]: 2026-01-23 19:30:30.861 [INFO][4430] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" Namespace="calico-system" Pod="whisker-77958bf869-kjvtc" WorkloadEndpoint="localhost-k8s-whisker--77958bf869--kjvtc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77958bf869--kjvtc-eth0", GenerateName:"whisker-77958bf869-", Namespace:"calico-system", SelfLink:"", UID:"93f1f488-26e4-4256-ae3b-355d056de5e6", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 30, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77958bf869", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-77958bf869-kjvtc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8e7e5538aab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:30:30.984987 containerd[1548]: 2026-01-23 19:30:30.861 [INFO][4430] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" Namespace="calico-system" Pod="whisker-77958bf869-kjvtc" WorkloadEndpoint="localhost-k8s-whisker--77958bf869--kjvtc-eth0" Jan 23 19:30:30.985112 containerd[1548]: 2026-01-23 19:30:30.861 [INFO][4430] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e7e5538aab ContainerID="c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" Namespace="calico-system" Pod="whisker-77958bf869-kjvtc" WorkloadEndpoint="localhost-k8s-whisker--77958bf869--kjvtc-eth0" Jan 23 19:30:30.985112 containerd[1548]: 2026-01-23 19:30:30.906 [INFO][4430] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" Namespace="calico-system" Pod="whisker-77958bf869-kjvtc" WorkloadEndpoint="localhost-k8s-whisker--77958bf869--kjvtc-eth0" Jan 23 19:30:30.985169 containerd[1548]: 2026-01-23 19:30:30.908 [INFO][4430] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" Namespace="calico-system" 
Pod="whisker-77958bf869-kjvtc" WorkloadEndpoint="localhost-k8s-whisker--77958bf869--kjvtc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77958bf869--kjvtc-eth0", GenerateName:"whisker-77958bf869-", Namespace:"calico-system", SelfLink:"", UID:"93f1f488-26e4-4256-ae3b-355d056de5e6", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77958bf869", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b", Pod:"whisker-77958bf869-kjvtc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8e7e5538aab", MAC:"ba:b0:d5:a1:cb:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:30:30.985319 containerd[1548]: 2026-01-23 19:30:30.972 [INFO][4430] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" Namespace="calico-system" Pod="whisker-77958bf869-kjvtc" WorkloadEndpoint="localhost-k8s-whisker--77958bf869--kjvtc-eth0" Jan 23 19:30:31.169478 systemd-networkd[1457]: vxlan.calico: Link UP Jan 23 19:30:31.169491 systemd-networkd[1457]: vxlan.calico: Gained carrier Jan 23 19:30:31.242719 containerd[1548]: time="2026-01-23T19:30:31.242423062Z" level=info msg="connecting to shim c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b" address="unix:///run/containerd/s/d0db66261617c6b2106fb7971e0eb2b43acce5a5869d07d1c01605b57ab4452d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:30:31.334568 systemd[1]: Started cri-containerd-c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b.scope - libcontainer container c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b. 
Jan 23 19:30:31.364370 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 19:30:31.446682 containerd[1548]: time="2026-01-23T19:30:31.445656421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77958bf869-kjvtc,Uid:93f1f488-26e4-4256-ae3b-355d056de5e6,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4136a0aa8476d82d119dd4b9a763cedb1ba3955b3372fdeb494e8aa928b401b\"" Jan 23 19:30:31.451351 containerd[1548]: time="2026-01-23T19:30:31.451115195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 19:30:31.524829 containerd[1548]: time="2026-01-23T19:30:31.524471080Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:30:31.529040 containerd[1548]: time="2026-01-23T19:30:31.529000899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 19:30:31.545616 containerd[1548]: time="2026-01-23T19:30:31.544663300Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 19:30:31.548425 kubelet[2837]: E0123 19:30:31.548355 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:30:31.548425 kubelet[2837]: E0123 19:30:31.548425 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:30:31.549082 kubelet[2837]: E0123 19:30:31.548684 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7a483d5077ea476891ae22814fc1300c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v55kf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77958bf869-kjvtc_calico-system(93f1f488-26e4-4256-ae3b-355d056de5e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 19:30:31.551657 containerd[1548]: time="2026-01-23T19:30:31.551219509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 19:30:31.658663 containerd[1548]: time="2026-01-23T19:30:31.658258425Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:30:31.661826 containerd[1548]: time="2026-01-23T19:30:31.661482775Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 19:30:31.666195 kubelet[2837]: E0123 19:30:31.662257 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:30:31.666195 kubelet[2837]: E0123 19:30:31.662375 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:30:31.666408 kubelet[2837]: E0123 19:30:31.662583 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v55kf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77958bf869-kjvtc_calico-system(93f1f488-26e4-4256-ae3b-355d056de5e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 19:30:31.666408 kubelet[2837]: E0123 19:30:31.664150 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:30:31.689996 containerd[1548]: time="2026-01-23T19:30:31.661630881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 19:30:32.341166 kubelet[2837]: E0123 19:30:32.341104 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:30:32.359526 systemd-networkd[1457]: cali8e7e5538aab: Gained IPv6LL Jan 23 19:30:32.732747 systemd-networkd[1457]: vxlan.calico: Gained IPv6LL Jan 23 19:30:32.887677 kubelet[2837]: E0123 19:30:32.883804 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:32.888344 containerd[1548]: time="2026-01-23T19:30:32.885593763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wnmcb,Uid:ee8a7eda-e868-4533-ab49-9798effa7813,Namespace:kube-system,Attempt:0,}" Jan 23 19:30:33.170022 systemd-networkd[1457]: calia6a09c42438: Link UP Jan 23 19:30:33.170379 systemd-networkd[1457]: calia6a09c42438: Gained carrier Jan 23 19:30:33.218431 containerd[1548]: 2026-01-23 19:30:32.987 [INFO][4617] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--wnmcb-eth0 coredns-674b8bbfcf- kube-system ee8a7eda-e868-4533-ab49-9798effa7813 994 0 2026-01-23 19:29:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-wnmcb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia6a09c42438 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" Namespace="kube-system" Pod="coredns-674b8bbfcf-wnmcb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wnmcb-" Jan 23 19:30:33.218431 containerd[1548]: 2026-01-23 19:30:32.987 [INFO][4617] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" Namespace="kube-system" Pod="coredns-674b8bbfcf-wnmcb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wnmcb-eth0" Jan 23 19:30:33.218431 containerd[1548]: 2026-01-23 19:30:33.048 [INFO][4631] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" HandleID="k8s-pod-network.bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" Workload="localhost-k8s-coredns--674b8bbfcf--wnmcb-eth0" Jan 23 19:30:33.218823 containerd[1548]: 2026-01-23 19:30:33.048 [INFO][4631] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" HandleID="k8s-pod-network.bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" Workload="localhost-k8s-coredns--674b8bbfcf--wnmcb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00004f570), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-wnmcb", "timestamp":"2026-01-23 19:30:33.048411897 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:30:33.218823 containerd[1548]: 2026-01-23 19:30:33.049 [INFO][4631] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:30:33.218823 containerd[1548]: 2026-01-23 19:30:33.049 [INFO][4631] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 19:30:33.218823 containerd[1548]: 2026-01-23 19:30:33.049 [INFO][4631] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 19:30:33.218823 containerd[1548]: 2026-01-23 19:30:33.064 [INFO][4631] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" host="localhost" Jan 23 19:30:33.218823 containerd[1548]: 2026-01-23 19:30:33.081 [INFO][4631] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 19:30:33.218823 containerd[1548]: 2026-01-23 19:30:33.096 [INFO][4631] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 19:30:33.218823 containerd[1548]: 2026-01-23 19:30:33.106 [INFO][4631] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 19:30:33.218823 containerd[1548]: 2026-01-23 19:30:33.111 [INFO][4631] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 19:30:33.218823 containerd[1548]: 2026-01-23 19:30:33.111 [INFO][4631] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" host="localhost" Jan 23 19:30:33.219204 containerd[1548]: 2026-01-23 19:30:33.115 [INFO][4631] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206 Jan 23 19:30:33.219204 containerd[1548]: 2026-01-23 19:30:33.130 [INFO][4631] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" host="localhost" Jan 23 19:30:33.219204 containerd[1548]: 2026-01-23 19:30:33.156 [INFO][4631] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" host="localhost" Jan 23 19:30:33.219204 containerd[1548]: 2026-01-23 19:30:33.156 [INFO][4631] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" host="localhost" Jan 23 19:30:33.219204 containerd[1548]: 2026-01-23 19:30:33.156 [INFO][4631] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
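The ipam.go trace just above is the standard Calico allocation path: look up the host's block affinity, confirm it for 192.168.88.128/26, then claim the first free address — 192.168.88.130 here, since .128 is the network address and .129 already went to the whisker pod at the top of this section. The toy model below reproduces only that last "first free address in an affine block" step; Calico's real allocator in libcalico-go claims IPs with compare-and-swap writes against the datastore, which is what "Writing block in order to claim IPs" refers to.

// blockalloc.go - toy model of claiming the next free address from a /26
// block, matching the walk in the ipam.go entries above. Not Calico's code.
package main

import (
	"fmt"
	"net/netip"
)

type block struct {
	cidr netip.Prefix
	used map[netip.Addr]bool
}

// claim returns the first address in the block not yet handed out.
func (b *block) claim() (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if !b.used[a] {
			b.used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.88.128/26"),
		used: map[netip.Addr]bool{},
	}
	b.used[netip.MustParseAddr("192.168.88.128")] = true // network address stays reserved
	b.used[netip.MustParseAddr("192.168.88.129")] = true // already assigned to the whisker pod
	if a, ok := b.claim(); ok {
		fmt.Println("claimed", a) // -> 192.168.88.130, as logged for coredns-wnmcb
	}
}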
Jan 23 19:30:33.219204 containerd[1548]: 2026-01-23 19:30:33.156 [INFO][4631] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" HandleID="k8s-pod-network.bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" Workload="localhost-k8s-coredns--674b8bbfcf--wnmcb-eth0" Jan 23 19:30:33.219535 containerd[1548]: 2026-01-23 19:30:33.163 [INFO][4617] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" Namespace="kube-system" Pod="coredns-674b8bbfcf-wnmcb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wnmcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wnmcb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ee8a7eda-e868-4533-ab49-9798effa7813", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-wnmcb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6a09c42438", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:30:33.219705 containerd[1548]: 2026-01-23 19:30:33.163 [INFO][4617] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" Namespace="kube-system" Pod="coredns-674b8bbfcf-wnmcb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wnmcb-eth0" Jan 23 19:30:33.219705 containerd[1548]: 2026-01-23 19:30:33.163 [INFO][4617] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6a09c42438 ContainerID="bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" Namespace="kube-system" Pod="coredns-674b8bbfcf-wnmcb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wnmcb-eth0" Jan 23 19:30:33.219705 containerd[1548]: 2026-01-23 19:30:33.174 [INFO][4617] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" Namespace="kube-system" Pod="coredns-674b8bbfcf-wnmcb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wnmcb-eth0" Jan 23 19:30:33.219827 
containerd[1548]: 2026-01-23 19:30:33.179 [INFO][4617] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" Namespace="kube-system" Pod="coredns-674b8bbfcf-wnmcb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wnmcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wnmcb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ee8a7eda-e868-4533-ab49-9798effa7813", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206", Pod:"coredns-674b8bbfcf-wnmcb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6a09c42438", MAC:"7e:9a:cb:0a:c1:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:30:33.219827 containerd[1548]: 2026-01-23 19:30:33.204 [INFO][4617] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" Namespace="kube-system" Pod="coredns-674b8bbfcf-wnmcb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wnmcb-eth0" Jan 23 19:30:33.303868 containerd[1548]: time="2026-01-23T19:30:33.303782816Z" level=info msg="connecting to shim bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206" address="unix:///run/containerd/s/f283d9ccaaf2ee1d39e3849a78103708076621cb8e42495f25bb12c70abb25fe" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:30:33.353017 kubelet[2837]: E0123 19:30:33.352962 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:30:33.384417 systemd[1]: Started cri-containerd-bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206.scope - libcontainer container bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206. Jan 23 19:30:33.459640 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 19:30:33.550407 containerd[1548]: time="2026-01-23T19:30:33.550004025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wnmcb,Uid:ee8a7eda-e868-4533-ab49-9798effa7813,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206\"" Jan 23 19:30:33.556330 kubelet[2837]: E0123 19:30:33.554861 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:33.573604 containerd[1548]: time="2026-01-23T19:30:33.573241031Z" level=info msg="CreateContainer within sandbox \"bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 19:30:33.625586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount449395044.mount: Deactivated successfully. Jan 23 19:30:33.637015 containerd[1548]: time="2026-01-23T19:30:33.636890601Z" level=info msg="Container cc1249941b62bccd5ac1e5c52d4b03464325568f83666f5052329743bac493b9: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:30:33.663462 containerd[1548]: time="2026-01-23T19:30:33.661391638Z" level=info msg="CreateContainer within sandbox \"bf7df58d1e1c4b9d6d5d4f415327cdb04fd5a33625e02346ac6b74b306f75206\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cc1249941b62bccd5ac1e5c52d4b03464325568f83666f5052329743bac493b9\"" Jan 23 19:30:33.666579 containerd[1548]: time="2026-01-23T19:30:33.665209530Z" level=info msg="StartContainer for \"cc1249941b62bccd5ac1e5c52d4b03464325568f83666f5052329743bac493b9\"" Jan 23 19:30:33.667461 containerd[1548]: time="2026-01-23T19:30:33.667433715Z" level=info msg="connecting to shim cc1249941b62bccd5ac1e5c52d4b03464325568f83666f5052329743bac493b9" address="unix:///run/containerd/s/f283d9ccaaf2ee1d39e3849a78103708076621cb8e42495f25bb12c70abb25fe" protocol=ttrpc version=3 Jan 23 19:30:33.723059 systemd[1]: Started cri-containerd-cc1249941b62bccd5ac1e5c52d4b03464325568f83666f5052329743bac493b9.scope - libcontainer container cc1249941b62bccd5ac1e5c52d4b03464325568f83666f5052329743bac493b9. 
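Every pull failure in this log is the same 404: the ghcr.io/flatcar/calico namespace has no v3.30.4 tag for whisker, whisker-backend, or (further below) kube-controllers. The lookup containerd performs can be approximated against the OCI distribution API; the sketch below assumes ghcr.io's anonymous token endpoint is sufficient for public repositories and trims most error handling, so treat it as a diagnostic probe rather than a faithful re-implementation of the resolver.

// tagcheck.go - ask ghcr.io whether a tag exists, mirroring the
// "fetch failed after status: 404 Not Found" that containerd logged above.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/whisker", "v3.30.4"

	// 1. Fetch an anonymous pull token (assumed adequate for public images).
	tr, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	defer tr.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	json.NewDecoder(tr.Body).Decode(&tok)

	// 2. HEAD the manifest; a 404 here is what kubelet surfaces as ErrImagePull.
	req, _ := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println(repo+":"+tag, "->", resp.Status) // "404 Not Found" for this tag
}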
Jan 23 19:30:33.823414 containerd[1548]: time="2026-01-23T19:30:33.822347116Z" level=info msg="StartContainer for \"cc1249941b62bccd5ac1e5c52d4b03464325568f83666f5052329743bac493b9\" returns successfully" Jan 23 19:30:34.354127 kubelet[2837]: E0123 19:30:34.353985 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:34.393929 kubelet[2837]: I0123 19:30:34.391491 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wnmcb" podStartSLOduration=88.391467213 podStartE2EDuration="1m28.391467213s" podCreationTimestamp="2026-01-23 19:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:30:34.389446591 +0000 UTC m=+91.066986444" watchObservedRunningTime="2026-01-23 19:30:34.391467213 +0000 UTC m=+91.069007056" Jan 23 19:30:34.887821 kubelet[2837]: E0123 19:30:34.886650 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:34.888123 containerd[1548]: time="2026-01-23T19:30:34.888012831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c2srk,Uid:9ad2e315-a1c2-4385-9b78-2b5be4403617,Namespace:kube-system,Attempt:0,}" Jan 23 19:30:34.891855 containerd[1548]: time="2026-01-23T19:30:34.890237024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67877fc7f5-bsvtq,Uid:ae1ba4f6-1230-4757-8b1a-af9cfe7ac401,Namespace:calico-system,Attempt:0,}" Jan 23 19:30:35.165648 systemd-networkd[1457]: calia6a09c42438: Gained IPv6LL Jan 23 19:30:35.193190 systemd-networkd[1457]: cali6c3ff11065b: Link UP Jan 23 19:30:35.196085 systemd-networkd[1457]: cali6c3ff11065b: Gained carrier Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.014 [INFO][4744] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--67877fc7f5--bsvtq-eth0 calico-kube-controllers-67877fc7f5- calico-system ae1ba4f6-1230-4757-8b1a-af9cfe7ac401 998 0 2026-01-23 19:29:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67877fc7f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-67877fc7f5-bsvtq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6c3ff11065b [] [] }} ContainerID="789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" Namespace="calico-system" Pod="calico-kube-controllers-67877fc7f5-bsvtq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67877fc7f5--bsvtq-" Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.015 [INFO][4744] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" Namespace="calico-system" Pod="calico-kube-controllers-67877fc7f5-bsvtq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67877fc7f5--bsvtq-eth0" Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.067 [INFO][4761] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" HandleID="k8s-pod-network.789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" Workload="localhost-k8s-calico--kube--controllers--67877fc7f5--bsvtq-eth0" Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.068 [INFO][4761] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" HandleID="k8s-pod-network.789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" Workload="localhost-k8s-calico--kube--controllers--67877fc7f5--bsvtq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-67877fc7f5-bsvtq", "timestamp":"2026-01-23 19:30:35.067897444 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.068 [INFO][4761] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.068 [INFO][4761] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.068 [INFO][4761] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.088 [INFO][4761] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" host="localhost" Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.108 [INFO][4761] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.121 [INFO][4761] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.126 [INFO][4761] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.133 [INFO][4761] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.133 [INFO][4761] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" host="localhost" Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.136 [INFO][4761] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62 Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.145 [INFO][4761] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" host="localhost" Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.164 [INFO][4761] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" host="localhost" Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.165 [INFO][4761] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: 
[192.168.88.131/26] handle="k8s-pod-network.789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" host="localhost" Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.165 [INFO][4761] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 19:30:35.228403 containerd[1548]: 2026-01-23 19:30:35.166 [INFO][4761] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" HandleID="k8s-pod-network.789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" Workload="localhost-k8s-calico--kube--controllers--67877fc7f5--bsvtq-eth0" Jan 23 19:30:35.230063 containerd[1548]: 2026-01-23 19:30:35.174 [INFO][4744] cni-plugin/k8s.go 418: Populated endpoint ContainerID="789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" Namespace="calico-system" Pod="calico-kube-controllers-67877fc7f5-bsvtq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67877fc7f5--bsvtq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67877fc7f5--bsvtq-eth0", GenerateName:"calico-kube-controllers-67877fc7f5-", Namespace:"calico-system", SelfLink:"", UID:"ae1ba4f6-1230-4757-8b1a-af9cfe7ac401", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 29, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67877fc7f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-67877fc7f5-bsvtq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6c3ff11065b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:30:35.230063 containerd[1548]: 2026-01-23 19:30:35.174 [INFO][4744] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" Namespace="calico-system" Pod="calico-kube-controllers-67877fc7f5-bsvtq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67877fc7f5--bsvtq-eth0" Jan 23 19:30:35.230063 containerd[1548]: 2026-01-23 19:30:35.174 [INFO][4744] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c3ff11065b ContainerID="789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" Namespace="calico-system" Pod="calico-kube-controllers-67877fc7f5-bsvtq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67877fc7f5--bsvtq-eth0" Jan 23 19:30:35.230063 containerd[1548]: 2026-01-23 19:30:35.197 [INFO][4744] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" 
Namespace="calico-system" Pod="calico-kube-controllers-67877fc7f5-bsvtq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67877fc7f5--bsvtq-eth0" Jan 23 19:30:35.230063 containerd[1548]: 2026-01-23 19:30:35.198 [INFO][4744] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" Namespace="calico-system" Pod="calico-kube-controllers-67877fc7f5-bsvtq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67877fc7f5--bsvtq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67877fc7f5--bsvtq-eth0", GenerateName:"calico-kube-controllers-67877fc7f5-", Namespace:"calico-system", SelfLink:"", UID:"ae1ba4f6-1230-4757-8b1a-af9cfe7ac401", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 29, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67877fc7f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62", Pod:"calico-kube-controllers-67877fc7f5-bsvtq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6c3ff11065b", MAC:"ce:db:a0:dc:94:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:30:35.230063 containerd[1548]: 2026-01-23 19:30:35.223 [INFO][4744] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" Namespace="calico-system" Pod="calico-kube-controllers-67877fc7f5-bsvtq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67877fc7f5--bsvtq-eth0" Jan 23 19:30:35.319052 containerd[1548]: time="2026-01-23T19:30:35.318998075Z" level=info msg="connecting to shim 789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62" address="unix:///run/containerd/s/0a912c9ca0801b683d5098b034a39a9661248eff16244781362df293d7671858" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:30:35.334359 systemd-networkd[1457]: caliae0b2d4eee9: Link UP Jan 23 19:30:35.335900 systemd-networkd[1457]: caliae0b2d4eee9: Gained carrier Jan 23 19:30:35.368697 kubelet[2837]: E0123 19:30:35.362093 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.035 [INFO][4734] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--c2srk-eth0 coredns-674b8bbfcf- kube-system 9ad2e315-a1c2-4385-9b78-2b5be4403617 991 0 2026-01-23 
19:29:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-c2srk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliae0b2d4eee9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-c2srk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c2srk-" Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.036 [INFO][4734] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-c2srk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c2srk-eth0" Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.103 [INFO][4766] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" HandleID="k8s-pod-network.a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" Workload="localhost-k8s-coredns--674b8bbfcf--c2srk-eth0" Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.104 [INFO][4766] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" HandleID="k8s-pod-network.a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" Workload="localhost-k8s-coredns--674b8bbfcf--c2srk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000490ea0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-c2srk", "timestamp":"2026-01-23 19:30:35.103921734 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.104 [INFO][4766] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.168 [INFO][4766] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
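The recurring dns.go:153 warnings are kubelet trimming the node's resolver configuration: glibc honours at most three nameserver lines (MAXNS), so when resolv.conf lists more, kubelet applies the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and logs the rest as omitted. A standalone re-creation of that check:

// resolvcheck.go - reproduce the limit behind kubelet's "Nameserver limits
// exceeded" entries: at most three nameserver lines are ever applied.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS, the limit kubelet enforces

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var ns []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > maxNameservers {
		fmt.Printf("nameserver limits exceeded: applying %v, omitting %v\n",
			ns[:maxNameservers], ns[maxNameservers:])
	} else {
		fmt.Printf("nameservers within limit: %v\n", ns)
	}
}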
Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.169 [INFO][4766] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.194 [INFO][4766] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" host="localhost" Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.230 [INFO][4766] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.258 [INFO][4766] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.262 [INFO][4766] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.272 [INFO][4766] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.272 [INFO][4766] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" host="localhost" Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.279 [INFO][4766] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0 Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.295 [INFO][4766] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" host="localhost" Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.319 [INFO][4766] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" host="localhost" Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.319 [INFO][4766] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" host="localhost" Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.319 [INFO][4766] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
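Worth noticing in the two interleaved IPAM traces above: request [4766] logs "About to acquire host-wide IPAM lock" at 35.104 but only acquires it at 35.168, immediately after request [4761] releases it at 35.165 — concurrent CNI ADDs on a node are serialized through that lock. A toy model of the contention, with sync.Mutex standing in for Calico's per-host lock:

// ipamlock.go - two CNI ADD requests contend for one host-wide lock; the
// second proceeds only after the first releases, as in the trace above.
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var ipamLock sync.Mutex
	var wg sync.WaitGroup
	pods := []string{"calico-kube-controllers-67877fc7f5-bsvtq", "coredns-674b8bbfcf-c2srk"}
	for _, pod := range pods {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			fmt.Println(pod, "about to acquire host-wide IPAM lock")
			ipamLock.Lock()
			fmt.Println(pod, "acquired host-wide IPAM lock")
			time.Sleep(50 * time.Millisecond) // look up affinity, claim an IP, write the block
			ipamLock.Unlock()
			fmt.Println(pod, "released host-wide IPAM lock")
		}(pod)
	}
	wg.Wait()
}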
Jan 23 19:30:35.378124 containerd[1548]: 2026-01-23 19:30:35.319 [INFO][4766] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" HandleID="k8s-pod-network.a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" Workload="localhost-k8s-coredns--674b8bbfcf--c2srk-eth0" Jan 23 19:30:35.379395 containerd[1548]: 2026-01-23 19:30:35.329 [INFO][4734] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-c2srk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c2srk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--c2srk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9ad2e315-a1c2-4385-9b78-2b5be4403617", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-c2srk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliae0b2d4eee9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:30:35.379395 containerd[1548]: 2026-01-23 19:30:35.329 [INFO][4734] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-c2srk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c2srk-eth0" Jan 23 19:30:35.379395 containerd[1548]: 2026-01-23 19:30:35.329 [INFO][4734] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae0b2d4eee9 ContainerID="a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-c2srk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c2srk-eth0" Jan 23 19:30:35.379395 containerd[1548]: 2026-01-23 19:30:35.337 [INFO][4734] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-c2srk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c2srk-eth0" Jan 23 19:30:35.379395 
containerd[1548]: 2026-01-23 19:30:35.339 [INFO][4734] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-c2srk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c2srk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--c2srk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9ad2e315-a1c2-4385-9b78-2b5be4403617", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0", Pod:"coredns-674b8bbfcf-c2srk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliae0b2d4eee9", MAC:"06:3b:ef:93:e4:7d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:30:35.379395 containerd[1548]: 2026-01-23 19:30:35.369 [INFO][4734] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-c2srk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c2srk-eth0" Jan 23 19:30:35.393078 systemd[1]: Started cri-containerd-789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62.scope - libcontainer container 789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62. Jan 23 19:30:35.443218 containerd[1548]: time="2026-01-23T19:30:35.442936037Z" level=info msg="connecting to shim a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0" address="unix:///run/containerd/s/82ce866acd7a61f0cfa91ab7a17bb0afd1d33e9dcd0c4ec307c2570b10f7805b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:30:35.443539 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 19:30:35.507333 systemd[1]: Started cri-containerd-a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0.scope - libcontainer container a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0. 
Jan 23 19:30:35.552444 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 19:30:35.566165 containerd[1548]: time="2026-01-23T19:30:35.566097475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67877fc7f5-bsvtq,Uid:ae1ba4f6-1230-4757-8b1a-af9cfe7ac401,Namespace:calico-system,Attempt:0,} returns sandbox id \"789720a083f470822e051096b85483149839b53e2a2c595769f48622c8d4ee62\"" Jan 23 19:30:35.570410 containerd[1548]: time="2026-01-23T19:30:35.570186607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 19:30:35.639022 containerd[1548]: time="2026-01-23T19:30:35.638979960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c2srk,Uid:9ad2e315-a1c2-4385-9b78-2b5be4403617,Namespace:kube-system,Attempt:0,} returns sandbox id \"a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0\"" Jan 23 19:30:35.642983 kubelet[2837]: E0123 19:30:35.642752 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:35.652686 containerd[1548]: time="2026-01-23T19:30:35.652512330Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:30:35.653009 containerd[1548]: time="2026-01-23T19:30:35.652980165Z" level=info msg="CreateContainer within sandbox \"a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 19:30:35.655860 containerd[1548]: time="2026-01-23T19:30:35.655776403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 19:30:35.656467 containerd[1548]: time="2026-01-23T19:30:35.655880207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 19:30:35.657706 kubelet[2837]: E0123 19:30:35.657247 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:30:35.657706 kubelet[2837]: E0123 19:30:35.657368 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:30:35.657706 kubelet[2837]: E0123 19:30:35.657614 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rdb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-67877fc7f5-bsvtq_calico-system(ae1ba4f6-1230-4757-8b1a-af9cfe7ac401): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 19:30:35.659110 kubelet[2837]: E0123 19:30:35.658821 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:30:35.701725 containerd[1548]: time="2026-01-23T19:30:35.700903905Z" level=info msg="Container 
8edb17432c980f73e432424061a026e588bbf6e839cbd37908fee87e0bd97505: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:30:35.725663 containerd[1548]: time="2026-01-23T19:30:35.725366581Z" level=info msg="CreateContainer within sandbox \"a53f93b0c4146ab6929e817b301a2592fb21524ddbeea9bf304cfbff4beeabf0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8edb17432c980f73e432424061a026e588bbf6e839cbd37908fee87e0bd97505\"" Jan 23 19:30:35.728778 containerd[1548]: time="2026-01-23T19:30:35.727060045Z" level=info msg="StartContainer for \"8edb17432c980f73e432424061a026e588bbf6e839cbd37908fee87e0bd97505\"" Jan 23 19:30:35.729183 containerd[1548]: time="2026-01-23T19:30:35.729033571Z" level=info msg="connecting to shim 8edb17432c980f73e432424061a026e588bbf6e839cbd37908fee87e0bd97505" address="unix:///run/containerd/s/82ce866acd7a61f0cfa91ab7a17bb0afd1d33e9dcd0c4ec307c2570b10f7805b" protocol=ttrpc version=3 Jan 23 19:30:35.785383 systemd[1]: Started cri-containerd-8edb17432c980f73e432424061a026e588bbf6e839cbd37908fee87e0bd97505.scope - libcontainer container 8edb17432c980f73e432424061a026e588bbf6e839cbd37908fee87e0bd97505. Jan 23 19:30:35.904455 containerd[1548]: time="2026-01-23T19:30:35.904338118Z" level=info msg="StartContainer for \"8edb17432c980f73e432424061a026e588bbf6e839cbd37908fee87e0bd97505\" returns successfully" Jan 23 19:30:36.374472 kubelet[2837]: E0123 19:30:36.373073 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:36.382005 kubelet[2837]: E0123 19:30:36.381454 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:36.383634 kubelet[2837]: E0123 19:30:36.383485 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:30:36.458336 kubelet[2837]: I0123 19:30:36.458050 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-c2srk" podStartSLOduration=90.458024223 podStartE2EDuration="1m30.458024223s" podCreationTimestamp="2026-01-23 19:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:30:36.412487501 +0000 UTC m=+93.090027354" watchObservedRunningTime="2026-01-23 19:30:36.458024223 +0000 UTC m=+93.135564057" Jan 23 19:30:36.884839 containerd[1548]: time="2026-01-23T19:30:36.884330601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545cbc66db-fpfjb,Uid:87c1e199-aab6-487a-be60-3401d4797307,Namespace:calico-apiserver,Attempt:0,}" Jan 23 19:30:37.088722 systemd-networkd[1457]: caliae0b2d4eee9: Gained IPv6LL Jan 23 19:30:37.151445 systemd-networkd[1457]: cali6c3ff11065b: Gained IPv6LL Jan 23 19:30:37.167818 systemd-networkd[1457]: cali150e459c057: Link 
UP Jan 23 19:30:37.171774 systemd-networkd[1457]: cali150e459c057: Gained carrier Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:36.962 [INFO][4935] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--545cbc66db--fpfjb-eth0 calico-apiserver-545cbc66db- calico-apiserver 87c1e199-aab6-487a-be60-3401d4797307 999 0 2026-01-23 19:29:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:545cbc66db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-545cbc66db-fpfjb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali150e459c057 [] [] }} ContainerID="3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" Namespace="calico-apiserver" Pod="calico-apiserver-545cbc66db-fpfjb" WorkloadEndpoint="localhost-k8s-calico--apiserver--545cbc66db--fpfjb-" Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:36.964 [INFO][4935] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" Namespace="calico-apiserver" Pod="calico-apiserver-545cbc66db-fpfjb" WorkloadEndpoint="localhost-k8s-calico--apiserver--545cbc66db--fpfjb-eth0" Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:37.023 [INFO][4949] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" HandleID="k8s-pod-network.3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" Workload="localhost-k8s-calico--apiserver--545cbc66db--fpfjb-eth0" Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:37.024 [INFO][4949] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" HandleID="k8s-pod-network.3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" Workload="localhost-k8s-calico--apiserver--545cbc66db--fpfjb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c16e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-545cbc66db-fpfjb", "timestamp":"2026-01-23 19:30:37.023789428 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:37.024 [INFO][4949] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:37.024 [INFO][4949] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:37.024 [INFO][4949] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:37.044 [INFO][4949] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" host="localhost" Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:37.064 [INFO][4949] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:37.081 [INFO][4949] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:37.090 [INFO][4949] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:37.100 [INFO][4949] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:37.100 [INFO][4949] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" host="localhost" Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:37.109 [INFO][4949] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559 Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:37.129 [INFO][4949] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" host="localhost" Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:37.148 [INFO][4949] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" host="localhost" Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:37.148 [INFO][4949] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" host="localhost" Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:37.148 [INFO][4949] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
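The entries above show Calico's IPAM path end to end: the plugin takes a host-wide lock, confirms this host's affinity for the block 192.168.88.128/26, claims the next free address (192.168.88.133), and writes the block back before releasing the lock. A /26 holds 64 addresses, so a single block comfortably covers this node's pods. A minimal standard-library sketch of that block arithmetic follows; the CIDR literal is taken from the log, everything else is illustrative rather than Calico's actual allocator:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// The block the IPAM log shows being loaded for host "localhost".
	_, block, err := net.ParseCIDR("192.168.88.128/26")
	if err != nil {
		panic(err)
	}
	ones, bits := block.Mask.Size()
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones)) // 64

	// Walk the first few candidates an allocator could hand out;
	// .133, claimed above, is the sixth address in this block.
	ip := make(net.IP, len(block.IP))
	copy(ip, block.IP)
	for i := 0; i < 6; i++ {
		fmt.Println(ip)
		for j := len(ip) - 1; j >= 0; j-- { // increment the address
			ip[j]++
			if ip[j] != 0 {
				break
			}
		}
	}
}
```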
Jan 23 19:30:37.219342 containerd[1548]: 2026-01-23 19:30:37.148 [INFO][4949] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" HandleID="k8s-pod-network.3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" Workload="localhost-k8s-calico--apiserver--545cbc66db--fpfjb-eth0" Jan 23 19:30:37.224333 containerd[1548]: 2026-01-23 19:30:37.158 [INFO][4935] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" Namespace="calico-apiserver" Pod="calico-apiserver-545cbc66db-fpfjb" WorkloadEndpoint="localhost-k8s-calico--apiserver--545cbc66db--fpfjb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--545cbc66db--fpfjb-eth0", GenerateName:"calico-apiserver-545cbc66db-", Namespace:"calico-apiserver", SelfLink:"", UID:"87c1e199-aab6-487a-be60-3401d4797307", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"545cbc66db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-545cbc66db-fpfjb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali150e459c057", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:30:37.224333 containerd[1548]: 2026-01-23 19:30:37.159 [INFO][4935] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" Namespace="calico-apiserver" Pod="calico-apiserver-545cbc66db-fpfjb" WorkloadEndpoint="localhost-k8s-calico--apiserver--545cbc66db--fpfjb-eth0" Jan 23 19:30:37.224333 containerd[1548]: 2026-01-23 19:30:37.159 [INFO][4935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali150e459c057 ContainerID="3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" Namespace="calico-apiserver" Pod="calico-apiserver-545cbc66db-fpfjb" WorkloadEndpoint="localhost-k8s-calico--apiserver--545cbc66db--fpfjb-eth0" Jan 23 19:30:37.224333 containerd[1548]: 2026-01-23 19:30:37.171 [INFO][4935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" Namespace="calico-apiserver" Pod="calico-apiserver-545cbc66db-fpfjb" WorkloadEndpoint="localhost-k8s-calico--apiserver--545cbc66db--fpfjb-eth0" Jan 23 19:30:37.224333 containerd[1548]: 2026-01-23 19:30:37.173 [INFO][4935] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" Namespace="calico-apiserver" Pod="calico-apiserver-545cbc66db-fpfjb" WorkloadEndpoint="localhost-k8s-calico--apiserver--545cbc66db--fpfjb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--545cbc66db--fpfjb-eth0", GenerateName:"calico-apiserver-545cbc66db-", Namespace:"calico-apiserver", SelfLink:"", UID:"87c1e199-aab6-487a-be60-3401d4797307", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"545cbc66db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559", Pod:"calico-apiserver-545cbc66db-fpfjb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali150e459c057", MAC:"3e:20:d0:81:b2:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:30:37.224333 containerd[1548]: 2026-01-23 19:30:37.198 [INFO][4935] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" Namespace="calico-apiserver" Pod="calico-apiserver-545cbc66db-fpfjb" WorkloadEndpoint="localhost-k8s-calico--apiserver--545cbc66db--fpfjb-eth0" Jan 23 19:30:37.312994 containerd[1548]: time="2026-01-23T19:30:37.312469696Z" level=info msg="connecting to shim 3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559" address="unix:///run/containerd/s/a4264fe5c77f929cc6f2ededfdb8093bafc002ca3fca0129b01535be1823b63f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:30:37.399993 systemd[1]: Started cri-containerd-3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559.scope - libcontainer container 3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559. 
Jan 23 19:30:37.406954 kubelet[2837]: E0123 19:30:37.402910 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:37.418120 kubelet[2837]: E0123 19:30:37.417467 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:30:37.487064 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 19:30:37.629158 containerd[1548]: time="2026-01-23T19:30:37.629013152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545cbc66db-fpfjb,Uid:87c1e199-aab6-487a-be60-3401d4797307,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3fb152ab6045f2db21ff94058edb4b2af5cd433c7aa55fff256655dd9b1e4559\"" Jan 23 19:30:37.632454 containerd[1548]: time="2026-01-23T19:30:37.632413269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:30:37.702834 containerd[1548]: time="2026-01-23T19:30:37.702205143Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:30:37.711235 containerd[1548]: time="2026-01-23T19:30:37.711024281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:30:37.711235 containerd[1548]: time="2026-01-23T19:30:37.711175633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:30:37.711731 kubelet[2837]: E0123 19:30:37.711683 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:30:37.711889 kubelet[2837]: E0123 19:30:37.711863 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:30:37.712182 kubelet[2837]: E0123 19:30:37.712126 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqpf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545cbc66db-fpfjb_calico-apiserver(87c1e199-aab6-487a-be60-3401d4797307): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:30:37.718725 kubelet[2837]: E0123 19:30:37.718514 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:30:38.374502 systemd-networkd[1457]: cali150e459c057: Gained IPv6LL Jan 23 19:30:38.415742 kubelet[2837]: E0123 19:30:38.415689 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:30:38.887645 kubelet[2837]: E0123 19:30:38.884086 
2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:39.427622 kubelet[2837]: E0123 19:30:39.427461 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:30:39.885215 containerd[1548]: time="2026-01-23T19:30:39.884710126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pspcd,Uid:7cbe68df-cea7-49bc-bbd7-253343631e45,Namespace:calico-system,Attempt:0,}" Jan 23 19:30:39.886009 containerd[1548]: time="2026-01-23T19:30:39.885842387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545cbc66db-s4jf2,Uid:87004552-13b2-409e-9fda-f933cdb145c9,Namespace:calico-apiserver,Attempt:0,}" Jan 23 19:30:40.218453 systemd-networkd[1457]: cali94323df8abd: Link UP Jan 23 19:30:40.234641 systemd-networkd[1457]: cali94323df8abd: Gained carrier Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:39.986 [INFO][5017] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--545cbc66db--s4jf2-eth0 calico-apiserver-545cbc66db- calico-apiserver 87004552-13b2-409e-9fda-f933cdb145c9 988 0 2026-01-23 19:29:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:545cbc66db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-545cbc66db-s4jf2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali94323df8abd [] [] }} ContainerID="16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" Namespace="calico-apiserver" Pod="calico-apiserver-545cbc66db-s4jf2" WorkloadEndpoint="localhost-k8s-calico--apiserver--545cbc66db--s4jf2-" Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:39.987 [INFO][5017] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" Namespace="calico-apiserver" Pod="calico-apiserver-545cbc66db-s4jf2" WorkloadEndpoint="localhost-k8s-calico--apiserver--545cbc66db--s4jf2-eth0" Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:40.040 [INFO][5050] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" HandleID="k8s-pod-network.16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" Workload="localhost-k8s-calico--apiserver--545cbc66db--s4jf2-eth0" Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:40.041 [INFO][5050] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" HandleID="k8s-pod-network.16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" 
Workload="localhost-k8s-calico--apiserver--545cbc66db--s4jf2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac400), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-545cbc66db-s4jf2", "timestamp":"2026-01-23 19:30:40.040044921 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:40.041 [INFO][5050] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:40.041 [INFO][5050] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:40.041 [INFO][5050] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:40.060 [INFO][5050] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" host="localhost" Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:40.078 [INFO][5050] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:40.097 [INFO][5050] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:40.107 [INFO][5050] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:40.122 [INFO][5050] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:40.123 [INFO][5050] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" host="localhost" Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:40.165 [INFO][5050] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:40.176 [INFO][5050] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" host="localhost" Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:40.197 [INFO][5050] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" host="localhost" Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:40.198 [INFO][5050] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" host="localhost" Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:40.198 [INFO][5050] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 19:30:40.288492 containerd[1548]: 2026-01-23 19:30:40.198 [INFO][5050] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" HandleID="k8s-pod-network.16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" Workload="localhost-k8s-calico--apiserver--545cbc66db--s4jf2-eth0" Jan 23 19:30:40.289689 containerd[1548]: 2026-01-23 19:30:40.210 [INFO][5017] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" Namespace="calico-apiserver" Pod="calico-apiserver-545cbc66db-s4jf2" WorkloadEndpoint="localhost-k8s-calico--apiserver--545cbc66db--s4jf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--545cbc66db--s4jf2-eth0", GenerateName:"calico-apiserver-545cbc66db-", Namespace:"calico-apiserver", SelfLink:"", UID:"87004552-13b2-409e-9fda-f933cdb145c9", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"545cbc66db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-545cbc66db-s4jf2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94323df8abd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:30:40.289689 containerd[1548]: 2026-01-23 19:30:40.210 [INFO][5017] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" Namespace="calico-apiserver" Pod="calico-apiserver-545cbc66db-s4jf2" WorkloadEndpoint="localhost-k8s-calico--apiserver--545cbc66db--s4jf2-eth0" Jan 23 19:30:40.289689 containerd[1548]: 2026-01-23 19:30:40.210 [INFO][5017] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94323df8abd ContainerID="16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" Namespace="calico-apiserver" Pod="calico-apiserver-545cbc66db-s4jf2" WorkloadEndpoint="localhost-k8s-calico--apiserver--545cbc66db--s4jf2-eth0" Jan 23 19:30:40.289689 containerd[1548]: 2026-01-23 19:30:40.229 [INFO][5017] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" Namespace="calico-apiserver" Pod="calico-apiserver-545cbc66db-s4jf2" WorkloadEndpoint="localhost-k8s-calico--apiserver--545cbc66db--s4jf2-eth0" Jan 23 19:30:40.289689 containerd[1548]: 2026-01-23 19:30:40.233 [INFO][5017] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" Namespace="calico-apiserver" Pod="calico-apiserver-545cbc66db-s4jf2" WorkloadEndpoint="localhost-k8s-calico--apiserver--545cbc66db--s4jf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--545cbc66db--s4jf2-eth0", GenerateName:"calico-apiserver-545cbc66db-", Namespace:"calico-apiserver", SelfLink:"", UID:"87004552-13b2-409e-9fda-f933cdb145c9", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"545cbc66db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed", Pod:"calico-apiserver-545cbc66db-s4jf2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94323df8abd", MAC:"5a:d4:50:5e:de:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:30:40.289689 containerd[1548]: 2026-01-23 19:30:40.282 [INFO][5017] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" Namespace="calico-apiserver" Pod="calico-apiserver-545cbc66db-s4jf2" WorkloadEndpoint="localhost-k8s-calico--apiserver--545cbc66db--s4jf2-eth0" Jan 23 19:30:40.341884 systemd-networkd[1457]: cali14ab4894580: Link UP Jan 23 19:30:40.347126 systemd-networkd[1457]: cali14ab4894580: Gained carrier Jan 23 19:30:40.387447 containerd[1548]: time="2026-01-23T19:30:40.387185456Z" level=info msg="connecting to shim 16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed" address="unix:///run/containerd/s/9c40caf9a1aad78e783f26ff32b361e625721b1a5bdc615c77ce5cce46236a97" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:39.983 [INFO][5015] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pspcd-eth0 csi-node-driver- calico-system 7cbe68df-cea7-49bc-bbd7-253343631e45 854 0 2026-01-23 19:29:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-pspcd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali14ab4894580 [] [] }} ContainerID="a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" Namespace="calico-system" Pod="csi-node-driver-pspcd" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--pspcd-" Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:39.983 [INFO][5015] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" Namespace="calico-system" Pod="csi-node-driver-pspcd" WorkloadEndpoint="localhost-k8s-csi--node--driver--pspcd-eth0" Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:40.051 [INFO][5047] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" HandleID="k8s-pod-network.a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" Workload="localhost-k8s-csi--node--driver--pspcd-eth0" Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:40.051 [INFO][5047] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" HandleID="k8s-pod-network.a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" Workload="localhost-k8s-csi--node--driver--pspcd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad0d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pspcd", "timestamp":"2026-01-23 19:30:40.051074085 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:40.051 [INFO][5047] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:40.199 [INFO][5047] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:40.199 [INFO][5047] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:40.225 [INFO][5047] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" host="localhost" Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:40.256 [INFO][5047] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:40.279 [INFO][5047] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:40.285 [INFO][5047] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:40.295 [INFO][5047] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:40.295 [INFO][5047] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" host="localhost" Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:40.304 [INFO][5047] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6 Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:40.315 [INFO][5047] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" host="localhost" Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:40.329 [INFO][5047] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" host="localhost" Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:40.329 [INFO][5047] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" host="localhost" Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:40.329 [INFO][5047] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 19:30:40.389352 containerd[1548]: 2026-01-23 19:30:40.329 [INFO][5047] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" HandleID="k8s-pod-network.a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" Workload="localhost-k8s-csi--node--driver--pspcd-eth0" Jan 23 19:30:40.391245 containerd[1548]: 2026-01-23 19:30:40.335 [INFO][5015] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" Namespace="calico-system" Pod="csi-node-driver-pspcd" WorkloadEndpoint="localhost-k8s-csi--node--driver--pspcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pspcd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cbe68df-cea7-49bc-bbd7-253343631e45", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 29, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pspcd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali14ab4894580", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:30:40.391245 containerd[1548]: 2026-01-23 19:30:40.335 [INFO][5015] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" Namespace="calico-system" Pod="csi-node-driver-pspcd" WorkloadEndpoint="localhost-k8s-csi--node--driver--pspcd-eth0" Jan 23 19:30:40.391245 containerd[1548]: 2026-01-23 19:30:40.335 [INFO][5015] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14ab4894580 ContainerID="a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" Namespace="calico-system" Pod="csi-node-driver-pspcd" WorkloadEndpoint="localhost-k8s-csi--node--driver--pspcd-eth0" Jan 23 19:30:40.391245 containerd[1548]: 2026-01-23 19:30:40.349 [INFO][5015] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" Namespace="calico-system" Pod="csi-node-driver-pspcd" WorkloadEndpoint="localhost-k8s-csi--node--driver--pspcd-eth0" Jan 23 19:30:40.391245 containerd[1548]: 2026-01-23 19:30:40.350 [INFO][5015] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" Namespace="calico-system" Pod="csi-node-driver-pspcd" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--pspcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pspcd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cbe68df-cea7-49bc-bbd7-253343631e45", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 29, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6", Pod:"csi-node-driver-pspcd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali14ab4894580", MAC:"96:dd:c8:94:29:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:30:40.391245 containerd[1548]: 2026-01-23 19:30:40.379 [INFO][5015] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" Namespace="calico-system" Pod="csi-node-driver-pspcd" WorkloadEndpoint="localhost-k8s-csi--node--driver--pspcd-eth0" Jan 23 19:30:40.459560 systemd[1]: Started cri-containerd-16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed.scope - libcontainer container 16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed. Jan 23 19:30:40.477365 containerd[1548]: time="2026-01-23T19:30:40.476988388Z" level=info msg="connecting to shim a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6" address="unix:///run/containerd/s/b9c24bcff7dbb794944768195728899842ec4b985830a9ec7f0a1e174f0c0019" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:30:40.520990 systemd[1]: Started cri-containerd-a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6.scope - libcontainer container a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6. 
Jan 23 19:30:40.533986 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 19:30:40.553796 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 19:30:40.612516 containerd[1548]: time="2026-01-23T19:30:40.609933221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pspcd,Uid:7cbe68df-cea7-49bc-bbd7-253343631e45,Namespace:calico-system,Attempt:0,} returns sandbox id \"a68c792e7e1dd2ef0c3b55620630c75e216f46aed422588c34eea4344ff3b3a6\"" Jan 23 19:30:40.612516 containerd[1548]: time="2026-01-23T19:30:40.611969786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 19:30:40.638484 containerd[1548]: time="2026-01-23T19:30:40.638321106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545cbc66db-s4jf2,Uid:87004552-13b2-409e-9fda-f933cdb145c9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"16b0916ebd1d9b7463c5d7771b62e2eb034c836209be306217bf19c5f21c48ed\"" Jan 23 19:30:40.695687 containerd[1548]: time="2026-01-23T19:30:40.695032241Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:30:40.699855 containerd[1548]: time="2026-01-23T19:30:40.699463339Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 19:30:40.699855 containerd[1548]: time="2026-01-23T19:30:40.699574738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 19:30:40.700771 kubelet[2837]: E0123 19:30:40.700371 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:30:40.700771 kubelet[2837]: E0123 19:30:40.700465 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:30:40.701488 kubelet[2837]: E0123 19:30:40.701077 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvkh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pspcd_calico-system(7cbe68df-cea7-49bc-bbd7-253343631e45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 19:30:40.702796 containerd[1548]: time="2026-01-23T19:30:40.702517142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:30:40.787395 containerd[1548]: time="2026-01-23T19:30:40.786844857Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:30:40.790424 containerd[1548]: time="2026-01-23T19:30:40.790230667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:30:40.790424 containerd[1548]: time="2026-01-23T19:30:40.790375312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:30:40.790910 kubelet[2837]: E0123 19:30:40.790853 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:30:40.791218 kubelet[2837]: E0123 19:30:40.790931 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:30:40.792031 kubelet[2837]: E0123 19:30:40.791953 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljmkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545cbc66db-s4jf2_calico-apiserver(87004552-13b2-409e-9fda-f933cdb145c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:30:40.792253 containerd[1548]: time="2026-01-23T19:30:40.792220728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 19:30:40.794965 kubelet[2837]: E0123 19:30:40.794422 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:30:40.875127 containerd[1548]: time="2026-01-23T19:30:40.875046130Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:30:40.878351 containerd[1548]: time="2026-01-23T19:30:40.878159951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 19:30:40.878455 containerd[1548]: time="2026-01-23T19:30:40.878355249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 19:30:40.880180 kubelet[2837]: E0123 19:30:40.879793 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:30:40.880180 kubelet[2837]: E0123 19:30:40.879861 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:30:40.880180 kubelet[2837]: E0123 19:30:40.880051 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvkh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pspcd_calico-system(7cbe68df-cea7-49bc-bbd7-253343631e45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 19:30:40.881480 kubelet[2837]: E0123 19:30:40.881394 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:30:40.883925 containerd[1548]: time="2026-01-23T19:30:40.883840773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r9djg,Uid:92099955-c310-4dc6-a23c-2c8c618bc3b8,Namespace:calico-system,Attempt:0,}" Jan 23 19:30:41.193945 systemd-networkd[1457]: cali2b482d231b1: Link UP Jan 23 19:30:41.194212 systemd-networkd[1457]: cali2b482d231b1: Gained carrier Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:40.976 [INFO][5176] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--r9djg-eth0 goldmane-666569f655- calico-system 92099955-c310-4dc6-a23c-2c8c618bc3b8 996 0 2026-01-23 19:29:39 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-r9djg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2b482d231b1 [] [] }} ContainerID="bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" Namespace="calico-system" Pod="goldmane-666569f655-r9djg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r9djg-" Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:40.976 [INFO][5176] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" Namespace="calico-system" Pod="goldmane-666569f655-r9djg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r9djg-eth0" Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:41.052 [INFO][5193] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" HandleID="k8s-pod-network.bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" Workload="localhost-k8s-goldmane--666569f655--r9djg-eth0" Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:41.053 [INFO][5193] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" HandleID="k8s-pod-network.bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" Workload="localhost-k8s-goldmane--666569f655--r9djg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139770), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-r9djg", "timestamp":"2026-01-23 19:30:41.052520303 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:41.053 [INFO][5193] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:41.053 [INFO][5193] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:41.053 [INFO][5193] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:41.072 [INFO][5193] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" host="localhost" Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:41.098 [INFO][5193] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:41.119 [INFO][5193] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:41.124 [INFO][5193] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:41.134 [INFO][5193] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:41.134 [INFO][5193] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" host="localhost" Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:41.139 [INFO][5193] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8 Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:41.154 [INFO][5193] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" host="localhost" Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:41.171 [INFO][5193] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" host="localhost" Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:41.171 [INFO][5193] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" host="localhost" Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:41.171 [INFO][5193] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 19:30:41.254324 containerd[1548]: 2026-01-23 19:30:41.171 [INFO][5193] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" HandleID="k8s-pod-network.bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" Workload="localhost-k8s-goldmane--666569f655--r9djg-eth0" Jan 23 19:30:41.256179 containerd[1548]: 2026-01-23 19:30:41.179 [INFO][5176] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" Namespace="calico-system" Pod="goldmane-666569f655-r9djg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r9djg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--r9djg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"92099955-c310-4dc6-a23c-2c8c618bc3b8", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 29, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-r9djg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2b482d231b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:30:41.256179 containerd[1548]: 2026-01-23 19:30:41.180 [INFO][5176] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" Namespace="calico-system" Pod="goldmane-666569f655-r9djg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r9djg-eth0" Jan 23 19:30:41.256179 containerd[1548]: 2026-01-23 19:30:41.180 [INFO][5176] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b482d231b1 ContainerID="bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" Namespace="calico-system" Pod="goldmane-666569f655-r9djg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r9djg-eth0" Jan 23 19:30:41.256179 containerd[1548]: 2026-01-23 19:30:41.199 [INFO][5176] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" Namespace="calico-system" Pod="goldmane-666569f655-r9djg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r9djg-eth0" Jan 23 19:30:41.256179 containerd[1548]: 2026-01-23 19:30:41.201 [INFO][5176] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" Namespace="calico-system" Pod="goldmane-666569f655-r9djg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r9djg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--r9djg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"92099955-c310-4dc6-a23c-2c8c618bc3b8", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 29, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8", Pod:"goldmane-666569f655-r9djg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2b482d231b1", MAC:"ae:4b:5c:3b:09:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:30:41.256179 containerd[1548]: 2026-01-23 19:30:41.241 [INFO][5176] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" Namespace="calico-system" Pod="goldmane-666569f655-r9djg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r9djg-eth0" Jan 23 19:30:41.358337 containerd[1548]: time="2026-01-23T19:30:41.358192441Z" level=info msg="connecting to shim bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8" address="unix:///run/containerd/s/eb9d6c980edc2a6fba321319792a16dfd143c4e1d96af1662aeb653684fb3125" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:30:41.479160 kubelet[2837]: E0123 19:30:41.479060 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:30:41.480529 kubelet[2837]: E0123 19:30:41.480493 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:30:41.487672 systemd[1]: Started cri-containerd-bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8.scope - libcontainer container bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8. Jan 23 19:30:41.531907 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 19:30:41.645171 containerd[1548]: time="2026-01-23T19:30:41.645095755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r9djg,Uid:92099955-c310-4dc6-a23c-2c8c618bc3b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"bab081b477ccfc6cbf68ac650abd4e1553f58a2a32586b75f71ef48af6ceb6f8\"" Jan 23 19:30:41.649131 containerd[1548]: time="2026-01-23T19:30:41.648470365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 19:30:41.731040 containerd[1548]: time="2026-01-23T19:30:41.730671949Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:30:41.739169 containerd[1548]: time="2026-01-23T19:30:41.738993299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 19:30:41.739169 containerd[1548]: time="2026-01-23T19:30:41.739090880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 19:30:41.739667 kubelet[2837]: E0123 19:30:41.739487 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:30:41.740669 kubelet[2837]: E0123 19:30:41.739702 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:30:41.740669 kubelet[2837]: E0123 19:30:41.739882 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdtmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r9djg_calico-system(92099955-c310-4dc6-a23c-2c8c618bc3b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 19:30:41.741921 kubelet[2837]: E0123 19:30:41.741832 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:30:41.821438 systemd-networkd[1457]: 
cali14ab4894580: Gained IPv6LL Jan 23 19:30:42.012650 systemd-networkd[1457]: cali94323df8abd: Gained IPv6LL Jan 23 19:30:42.499722 kubelet[2837]: E0123 19:30:42.498785 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:30:42.499722 kubelet[2837]: E0123 19:30:42.498990 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:30:42.508032 kubelet[2837]: E0123 19:30:42.507687 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:30:43.036891 systemd-networkd[1457]: cali2b482d231b1: Gained IPv6LL Jan 23 19:30:43.496786 kubelet[2837]: E0123 19:30:43.496573 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:30:48.886341 containerd[1548]: time="2026-01-23T19:30:48.885932505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 19:30:49.037322 containerd[1548]: time="2026-01-23T19:30:49.036815042Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:30:49.039896 containerd[1548]: time="2026-01-23T19:30:49.039850250Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 19:30:49.041718 containerd[1548]: time="2026-01-23T19:30:49.040060491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 19:30:49.043308 kubelet[2837]: E0123 19:30:49.043141 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:30:49.044817 kubelet[2837]: E0123 19:30:49.043253 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:30:49.049887 kubelet[2837]: E0123 19:30:49.049753 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7a483d5077ea476891ae22814fc1300c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v55kf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77958bf869-kjvtc_calico-system(93f1f488-26e4-4256-ae3b-355d056de5e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 19:30:49.053363 containerd[1548]: time="2026-01-23T19:30:49.053233405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 19:30:49.141911 containerd[1548]: time="2026-01-23T19:30:49.140920347Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
19:30:49.145922 containerd[1548]: time="2026-01-23T19:30:49.145740390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 19:30:49.145922 containerd[1548]: time="2026-01-23T19:30:49.145874651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 19:30:49.146466 kubelet[2837]: E0123 19:30:49.146242 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:30:49.146466 kubelet[2837]: E0123 19:30:49.146439 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:30:49.146990 kubelet[2837]: E0123 19:30:49.146920 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v55kf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77958bf869-kjvtc_calico-system(93f1f488-26e4-4256-ae3b-355d056de5e6): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 19:30:49.149564 kubelet[2837]: E0123 19:30:49.149515 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:30:50.889078 containerd[1548]: time="2026-01-23T19:30:50.889025884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 19:30:51.058783 containerd[1548]: time="2026-01-23T19:30:51.058380402Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:30:51.088121 containerd[1548]: time="2026-01-23T19:30:51.087920787Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 19:30:51.088121 containerd[1548]: time="2026-01-23T19:30:51.087970757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 19:30:51.090441 kubelet[2837]: E0123 19:30:51.089124 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:30:51.090441 kubelet[2837]: E0123 19:30:51.089331 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:30:51.090441 kubelet[2837]: E0123 19:30:51.089529 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rdb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-67877fc7f5-bsvtq_calico-system(ae1ba4f6-1230-4757-8b1a-af9cfe7ac401): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 19:30:51.093854 kubelet[2837]: E0123 19:30:51.091499 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:30:53.900172 containerd[1548]: time="2026-01-23T19:30:53.899532659Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 19:30:53.983361 containerd[1548]: time="2026-01-23T19:30:53.982741467Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:30:53.991782 containerd[1548]: time="2026-01-23T19:30:53.991493428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 19:30:53.991782 containerd[1548]: time="2026-01-23T19:30:53.991663005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 19:30:53.992072 kubelet[2837]: E0123 19:30:53.991946 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:30:53.994760 kubelet[2837]: E0123 19:30:53.992768 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:30:53.994760 kubelet[2837]: E0123 19:30:53.993141 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvkh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pspcd_calico-system(7cbe68df-cea7-49bc-bbd7-253343631e45): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 19:30:53.995161 containerd[1548]: time="2026-01-23T19:30:53.994585364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 19:30:54.090424 containerd[1548]: time="2026-01-23T19:30:54.090104696Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:30:54.097228 containerd[1548]: time="2026-01-23T19:30:54.097089897Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 19:30:54.097438 containerd[1548]: time="2026-01-23T19:30:54.097256508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 19:30:54.098712 containerd[1548]: time="2026-01-23T19:30:54.097998702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 19:30:54.099016 kubelet[2837]: E0123 19:30:54.097559 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:30:54.099016 kubelet[2837]: E0123 19:30:54.097660 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:30:54.099016 kubelet[2837]: E0123 19:30:54.098563 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdtmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r9djg_calico-system(92099955-c310-4dc6-a23c-2c8c618bc3b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 19:30:54.100975 kubelet[2837]: E0123 19:30:54.100786 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:30:54.229759 containerd[1548]: 
time="2026-01-23T19:30:54.228530535Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:30:54.230998 containerd[1548]: time="2026-01-23T19:30:54.230871319Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 19:30:54.230998 containerd[1548]: time="2026-01-23T19:30:54.230949244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 19:30:54.231502 kubelet[2837]: E0123 19:30:54.231413 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:30:54.231588 kubelet[2837]: E0123 19:30:54.231498 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:30:54.231796 kubelet[2837]: E0123 19:30:54.231699 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvkh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pspcd_calico-system(7cbe68df-cea7-49bc-bbd7-253343631e45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 19:30:54.234326 kubelet[2837]: E0123 19:30:54.234195 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:30:54.894977 containerd[1548]: time="2026-01-23T19:30:54.893981530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:30:55.036066 containerd[1548]: time="2026-01-23T19:30:55.034580524Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:30:55.044151 containerd[1548]: time="2026-01-23T19:30:55.043945780Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:30:55.044429 containerd[1548]: time="2026-01-23T19:30:55.044381102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:30:55.046502 kubelet[2837]: E0123 19:30:55.044753 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:30:55.046502 kubelet[2837]: E0123 19:30:55.044818 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:30:55.046502 kubelet[2837]: E0123 19:30:55.044992 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqpf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545cbc66db-fpfjb_calico-apiserver(87c1e199-aab6-487a-be60-3401d4797307): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:30:55.088975 kubelet[2837]: E0123 19:30:55.079740 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:30:57.001648 kubelet[2837]: E0123 19:30:57.000168 2837 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.117s" Jan 23 19:30:57.072879 kubelet[2837]: E0123 19:30:57.067172 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:57.088999 containerd[1548]: time="2026-01-23T19:30:57.080553434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:30:57.461002 containerd[1548]: time="2026-01-23T19:30:57.433772773Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:30:57.539721 containerd[1548]: time="2026-01-23T19:30:57.502823672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:30:57.539721 containerd[1548]: time="2026-01-23T19:30:57.503450461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:30:59.378354 kubelet[2837]: E0123 19:30:59.334198 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:30:59.378354 kubelet[2837]: E0123 19:30:59.335020 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:30:59.378354 kubelet[2837]: E0123 19:30:59.351384 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljmkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545cbc66db-s4jf2_calico-apiserver(87004552-13b2-409e-9fda-f933cdb145c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:30:59.482062 kubelet[2837]: E0123 19:30:59.409506 2837 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.46s" Jan 23 19:30:59.482506 kubelet[2837]: E0123 19:30:59.458388 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:30:59.502111 kubelet[2837]: E0123 19:30:59.502057 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:31:01.535337 kubelet[2837]: E0123 19:31:01.534103 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:31:01.897712 kubelet[2837]: E0123 19:31:01.897065 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:31:04.897841 kubelet[2837]: E0123 19:31:04.897762 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:31:08.741912 kubelet[2837]: E0123 19:31:08.741084 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:31:10.101210 kubelet[2837]: E0123 19:31:10.101104 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:31:13.887584 kubelet[2837]: E0123 19:31:13.887460 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:31:13.899120 containerd[1548]: time="2026-01-23T19:31:13.890325239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 19:31:13.975390 containerd[1548]: time="2026-01-23T19:31:13.975155387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:31:13.981591 containerd[1548]: time="2026-01-23T19:31:13.981484113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 19:31:13.981758 containerd[1548]: time="2026-01-23T19:31:13.981631849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 19:31:13.983190 kubelet[2837]: E0123 19:31:13.982546 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:31:13.983190 kubelet[2837]: E0123 19:31:13.982648 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:31:13.983658 kubelet[2837]: E0123 19:31:13.983248 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7a483d5077ea476891ae22814fc1300c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v55kf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77958bf869-kjvtc_calico-system(93f1f488-26e4-4256-ae3b-355d056de5e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 19:31:13.993501 containerd[1548]: time="2026-01-23T19:31:13.992833875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 19:31:14.153192 containerd[1548]: time="2026-01-23T19:31:14.151780689Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:31:14.174019 containerd[1548]: time="2026-01-23T19:31:14.167219708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 19:31:14.183120 containerd[1548]: time="2026-01-23T19:31:14.183014056Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 19:31:14.186005 kubelet[2837]: E0123 19:31:14.183918 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:31:14.186005 kubelet[2837]: E0123 19:31:14.183982 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:31:14.186005 kubelet[2837]: E0123 19:31:14.184207 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v55kf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77958bf869-kjvtc_calico-system(93f1f488-26e4-4256-ae3b-355d056de5e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 19:31:14.186005 kubelet[2837]: E0123 19:31:14.185662 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:31:15.899008 containerd[1548]: time="2026-01-23T19:31:15.897375681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 19:31:16.022230 containerd[1548]: time="2026-01-23T19:31:16.022124968Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Jan 23 19:31:16.041327 containerd[1548]: time="2026-01-23T19:31:16.041035691Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 19:31:16.041327 containerd[1548]: time="2026-01-23T19:31:16.041197613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 19:31:16.042761 kubelet[2837]: E0123 19:31:16.042599 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:31:16.042761 kubelet[2837]: E0123 19:31:16.042695 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:31:16.044188 kubelet[2837]: E0123 19:31:16.042947 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rdb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-67877fc7f5-bsvtq_calico-system(ae1ba4f6-1230-4757-8b1a-af9cfe7ac401): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 19:31:16.045985 kubelet[2837]: E0123 19:31:16.044903 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:31:17.896489 containerd[1548]: time="2026-01-23T19:31:17.894536792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 19:31:18.049910 containerd[1548]: time="2026-01-23T19:31:18.042645891Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:31:18.061514 containerd[1548]: time="2026-01-23T19:31:18.061140403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 19:31:18.062766 containerd[1548]: time="2026-01-23T19:31:18.061502160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 19:31:18.064459 kubelet[2837]: E0123 19:31:18.063997 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:31:18.087207 kubelet[2837]: E0123 19:31:18.064470 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:31:18.087207 kubelet[2837]: E0123 19:31:18.065123 2837 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvkh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pspcd_calico-system(7cbe68df-cea7-49bc-bbd7-253343631e45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 19:31:18.094457 containerd[1548]: time="2026-01-23T19:31:18.068039243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 19:31:18.198883 containerd[1548]: time="2026-01-23T19:31:18.193536527Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:31:18.200247 containerd[1548]: time="2026-01-23T19:31:18.200197058Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 19:31:18.200595 containerd[1548]: time="2026-01-23T19:31:18.200366018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 19:31:18.203205 kubelet[2837]: E0123 19:31:18.200820 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:31:18.203205 kubelet[2837]: E0123 19:31:18.200953 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:31:18.203205 kubelet[2837]: E0123 19:31:18.201168 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvkh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pspcd_calico-system(7cbe68df-cea7-49bc-bbd7-253343631e45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 19:31:18.203205 kubelet[2837]: E0123 19:31:18.203059 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:31:21.894614 containerd[1548]: time="2026-01-23T19:31:21.894529602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 19:31:22.001807 containerd[1548]: time="2026-01-23T19:31:21.999926622Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:31:22.008614 containerd[1548]: time="2026-01-23T19:31:22.008253614Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 19:31:22.008614 containerd[1548]: time="2026-01-23T19:31:22.008403604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 19:31:22.010620 kubelet[2837]: E0123 19:31:22.010517 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:31:22.011502 kubelet[2837]: E0123 19:31:22.010626 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:31:22.011502 kubelet[2837]: E0123 19:31:22.010981 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdtmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r9djg_calico-system(92099955-c310-4dc6-a23c-2c8c618bc3b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 19:31:22.015310 kubelet[2837]: E0123 19:31:22.014151 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:31:22.017924 containerd[1548]: 
time="2026-01-23T19:31:22.016110991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:31:22.120164 containerd[1548]: time="2026-01-23T19:31:22.119928608Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:31:22.131534 containerd[1548]: time="2026-01-23T19:31:22.131219414Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:31:22.131534 containerd[1548]: time="2026-01-23T19:31:22.131429695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:31:22.133203 kubelet[2837]: E0123 19:31:22.132597 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:31:22.133203 kubelet[2837]: E0123 19:31:22.132673 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:31:22.133203 kubelet[2837]: E0123 19:31:22.132895 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqpf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545cbc66db-fpfjb_calico-apiserver(87c1e199-aab6-487a-be60-3401d4797307): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:31:22.134387 kubelet[2837]: E0123 19:31:22.134242 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:31:22.886340 kubelet[2837]: E0123 19:31:22.886140 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:31:25.906374 kubelet[2837]: E0123 19:31:25.904975 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:31:27.899197 containerd[1548]: time="2026-01-23T19:31:27.898689310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:31:28.037072 containerd[1548]: time="2026-01-23T19:31:28.036383041Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:31:28.041736 containerd[1548]: time="2026-01-23T19:31:28.041563378Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:31:28.041736 containerd[1548]: time="2026-01-23T19:31:28.041637248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:31:28.042117 kubelet[2837]: E0123 19:31:28.042000 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:31:28.042117 kubelet[2837]: E0123 19:31:28.042105 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:31:28.042768 kubelet[2837]: E0123 19:31:28.042493 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljmkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545cbc66db-s4jf2_calico-apiserver(87004552-13b2-409e-9fda-f933cdb145c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:31:28.044529 kubelet[2837]: E0123 19:31:28.044388 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:31:28.886201 kubelet[2837]: E0123 19:31:28.884576 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:31:29.890981 kubelet[2837]: E0123 19:31:29.890703 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:31:30.149694 systemd[1]: Started sshd@7-10.0.0.128:22-10.0.0.1:37872.service - OpenSSH per-connection server daemon (10.0.0.1:37872). Jan 23 19:31:30.395718 sshd[5356]: Accepted publickey for core from 10.0.0.1 port 37872 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:31:30.401155 sshd-session[5356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:30.426997 systemd-logind[1534]: New session 8 of user core. Jan 23 19:31:30.444520 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 19:31:30.856360 sshd[5359]: Connection closed by 10.0.0.1 port 37872 Jan 23 19:31:30.860118 sshd-session[5356]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:30.919518 systemd[1]: sshd@7-10.0.0.128:22-10.0.0.1:37872.service: Deactivated successfully. 
Jan 23 19:31:30.934126 kubelet[2837]: E0123 19:31:30.922216 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:31:30.939955 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 19:31:30.952717 systemd-logind[1534]: Session 8 logged out. Waiting for processes to exit. Jan 23 19:31:30.957919 systemd-logind[1534]: Removed session 8. Jan 23 19:31:32.903618 kubelet[2837]: E0123 19:31:32.897076 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:31:35.923598 systemd[1]: Started sshd@8-10.0.0.128:22-10.0.0.1:49108.service - OpenSSH per-connection server daemon (10.0.0.1:49108). Jan 23 19:31:35.932550 kubelet[2837]: E0123 19:31:35.932458 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:31:36.070736 sshd[5373]: Accepted publickey for core from 10.0.0.1 port 49108 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:31:36.074110 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:36.142190 systemd-logind[1534]: New session 9 of user core. Jan 23 19:31:36.156502 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 19:31:36.613183 sshd[5379]: Connection closed by 10.0.0.1 port 49108 Jan 23 19:31:36.613649 sshd-session[5373]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:36.629835 systemd-logind[1534]: Session 9 logged out. Waiting for processes to exit. Jan 23 19:31:36.631940 systemd[1]: sshd@8-10.0.0.128:22-10.0.0.1:49108.service: Deactivated successfully. Jan 23 19:31:36.636134 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 19:31:36.645414 systemd-logind[1534]: Removed session 9. 
Jan 23 19:31:38.884959 kubelet[2837]: E0123 19:31:38.884457 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:31:39.902899 kubelet[2837]: E0123 19:31:39.902505 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:31:40.893935 kubelet[2837]: E0123 19:31:40.893759 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:31:41.640178 systemd[1]: Started sshd@9-10.0.0.128:22-10.0.0.1:49122.service - OpenSSH per-connection server daemon (10.0.0.1:49122). Jan 23 19:31:41.728535 sshd[5396]: Accepted publickey for core from 10.0.0.1 port 49122 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:31:41.732567 sshd-session[5396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:41.742585 systemd-logind[1534]: New session 10 of user core. Jan 23 19:31:41.755757 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 23 19:31:41.901419 kubelet[2837]: E0123 19:31:41.900153 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:31:42.067940 sshd[5399]: Connection closed by 10.0.0.1 port 49122 Jan 23 19:31:42.068925 sshd-session[5396]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:42.085817 systemd[1]: sshd@9-10.0.0.128:22-10.0.0.1:49122.service: Deactivated successfully. Jan 23 19:31:42.093180 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 19:31:42.096167 systemd-logind[1534]: Session 10 logged out. Waiting for processes to exit. Jan 23 19:31:42.103744 systemd-logind[1534]: Removed session 10. Jan 23 19:31:42.884079 kubelet[2837]: E0123 19:31:42.883994 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:31:45.891039 kubelet[2837]: E0123 19:31:45.890735 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:31:46.890082 kubelet[2837]: E0123 19:31:46.889541 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:31:47.094139 systemd[1]: Started sshd@10-10.0.0.128:22-10.0.0.1:33236.service - OpenSSH per-connection server daemon (10.0.0.1:33236). 
Jan 23 19:31:47.368019 sshd[5420]: Accepted publickey for core from 10.0.0.1 port 33236 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:31:47.370756 sshd-session[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:47.379190 systemd-logind[1534]: New session 11 of user core. Jan 23 19:31:47.395718 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 19:31:47.759679 sshd[5423]: Connection closed by 10.0.0.1 port 33236 Jan 23 19:31:47.760767 sshd-session[5420]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:47.770048 systemd[1]: sshd@10-10.0.0.128:22-10.0.0.1:33236.service: Deactivated successfully. Jan 23 19:31:47.801088 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 19:31:47.818491 systemd-logind[1534]: Session 11 logged out. Waiting for processes to exit. Jan 23 19:31:47.821153 systemd-logind[1534]: Removed session 11. Jan 23 19:31:47.887816 kubelet[2837]: E0123 19:31:47.887724 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:31:51.888596 kubelet[2837]: E0123 19:31:51.888440 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:31:52.787574 systemd[1]: Started sshd@11-10.0.0.128:22-10.0.0.1:33250.service - OpenSSH per-connection server daemon (10.0.0.1:33250). Jan 23 19:31:52.888559 sshd[5444]: Accepted publickey for core from 10.0.0.1 port 33250 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:31:52.892461 sshd-session[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:52.913240 systemd-logind[1534]: New session 12 of user core. Jan 23 19:31:52.927623 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 19:31:53.380349 sshd[5447]: Connection closed by 10.0.0.1 port 33250 Jan 23 19:31:53.382109 sshd-session[5444]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:53.399540 systemd[1]: sshd@11-10.0.0.128:22-10.0.0.1:33250.service: Deactivated successfully. Jan 23 19:31:53.405631 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 19:31:53.408191 systemd-logind[1534]: Session 12 logged out. Waiting for processes to exit. Jan 23 19:31:53.412550 systemd-logind[1534]: Removed session 12. 
Jan 23 19:31:53.890257 kubelet[2837]: E0123 19:31:53.890201 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:31:53.900041 kubelet[2837]: E0123 19:31:53.898812 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:31:54.888858 kubelet[2837]: E0123 19:31:54.888798 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:31:58.456671 systemd[1]: Started sshd@12-10.0.0.128:22-10.0.0.1:60480.service - OpenSSH per-connection server daemon (10.0.0.1:60480). Jan 23 19:31:58.614100 sshd[5463]: Accepted publickey for core from 10.0.0.1 port 60480 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:31:58.616162 sshd-session[5463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:58.638620 systemd-logind[1534]: New session 13 of user core. Jan 23 19:31:58.662221 systemd[1]: Started session-13.scope - Session 13 of User core. 
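The containerd records that follow show each pull dying on a 404 from ghcr.io before any image bytes are transferred. The same check can be reproduced out of band with the standard OCI distribution API; the anonymous token endpoint used below is an assumption about ghcr.io's setup for public images, not something taken from this log.

```python
import json
import urllib.error
import urllib.request

# Ask ghcr.io directly whether a tag exists, via the OCI distribution API.
# Assumption: ghcr.io hands out anonymous pull tokens from /token for
# public repositories; endpoint details are not taken from this log.
REPO = "flatcar/calico/kube-controllers"
TAG = "v3.30.4"

tok_url = f"https://ghcr.io/token?scope=repository:{REPO}:pull"
token = json.load(urllib.request.urlopen(tok_url))["token"]

req = urllib.request.Request(
    f"https://ghcr.io/v2/{REPO}/manifests/{TAG}",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.oci.image.index.v1+json",
    },
    method="HEAD",
)
try:
    with urllib.request.urlopen(req) as resp:
        print(f"{REPO}:{TAG} exists (HTTP {resp.status})")
except urllib.error.HTTPError as err:
    # A 404 here is the same answer containerd logged as
    # "fetch failed after status: 404 Not Found".
    print(f"{REPO}:{TAG}: HTTP {err.code}")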
Jan 23 19:31:58.887344 kubelet[2837]: E0123 19:31:58.886813 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:31:58.958701 sshd[5466]: Connection closed by 10.0.0.1 port 60480 Jan 23 19:31:58.961020 sshd-session[5463]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:58.977425 systemd[1]: sshd@12-10.0.0.128:22-10.0.0.1:60480.service: Deactivated successfully. Jan 23 19:31:58.984167 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 19:31:58.996215 systemd-logind[1534]: Session 13 logged out. Waiting for processes to exit. Jan 23 19:31:59.003840 systemd-logind[1534]: Removed session 13. Jan 23 19:31:59.886892 containerd[1548]: time="2026-01-23T19:31:59.886796015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 19:32:00.026144 containerd[1548]: time="2026-01-23T19:32:00.025884744Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:32:00.031939 containerd[1548]: time="2026-01-23T19:32:00.029404826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 19:32:00.031939 containerd[1548]: time="2026-01-23T19:32:00.029522626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 19:32:00.032161 kubelet[2837]: E0123 19:32:00.029846 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:32:00.032161 kubelet[2837]: E0123 19:32:00.029918 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:32:00.032161 kubelet[2837]: E0123 19:32:00.030231 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rdb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-67877fc7f5-bsvtq_calico-system(ae1ba4f6-1230-4757-8b1a-af9cfe7ac401): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 19:32:00.033903 kubelet[2837]: E0123 19:32:00.033152 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:32:01.887338 kubelet[2837]: E0123 19:32:01.885530 2837 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:32:03.987065 systemd[1]: Started sshd@13-10.0.0.128:22-10.0.0.1:60492.service - OpenSSH per-connection server daemon (10.0.0.1:60492). Jan 23 19:32:04.187674 sshd[5523]: Accepted publickey for core from 10.0.0.1 port 60492 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:32:04.195698 sshd-session[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:04.216594 systemd-logind[1534]: New session 14 of user core. Jan 23 19:32:04.231642 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 19:32:04.564629 sshd[5526]: Connection closed by 10.0.0.1 port 60492 Jan 23 19:32:04.566543 sshd-session[5523]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:04.598584 systemd[1]: sshd@13-10.0.0.128:22-10.0.0.1:60492.service: Deactivated successfully. Jan 23 19:32:04.625192 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 19:32:04.629832 systemd-logind[1534]: Session 14 logged out. Waiting for processes to exit. Jan 23 19:32:04.631984 systemd-logind[1534]: Removed session 14. Jan 23 19:32:04.888006 kubelet[2837]: E0123 19:32:04.885576 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:05.889366 kubelet[2837]: E0123 19:32:05.887325 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:32:06.942783 containerd[1548]: time="2026-01-23T19:32:06.937861595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 19:32:07.068649 containerd[1548]: time="2026-01-23T19:32:07.068517037Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:32:07.075853 containerd[1548]: time="2026-01-23T19:32:07.075694786Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 19:32:07.075853 containerd[1548]: time="2026-01-23T19:32:07.075823787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 19:32:07.076245 kubelet[2837]: E0123 19:32:07.076154 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:32:07.076245 kubelet[2837]: E0123 19:32:07.076237 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:32:07.079088 kubelet[2837]: E0123 19:32:07.078928 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7a483d5077ea476891ae22814fc1300c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v55kf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77958bf869-kjvtc_calico-system(93f1f488-26e4-4256-ae3b-355d056de5e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 19:32:07.080344 containerd[1548]: time="2026-01-23T19:32:07.079857591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 19:32:07.177817 containerd[1548]: time="2026-01-23T19:32:07.177708327Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:32:07.181760 containerd[1548]: time="2026-01-23T19:32:07.181609088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 19:32:07.181760 containerd[1548]: time="2026-01-23T19:32:07.181754199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 19:32:07.182626 kubelet[2837]: E0123 19:32:07.182518 2837 log.go:32] "PullImage from image service failed" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:32:07.182626 kubelet[2837]: E0123 19:32:07.182615 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:32:07.183117 kubelet[2837]: E0123 19:32:07.182874 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvkh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pspcd_calico-system(7cbe68df-cea7-49bc-bbd7-253343631e45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 19:32:07.187873 containerd[1548]: time="2026-01-23T19:32:07.185739538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 19:32:07.324465 containerd[1548]: time="2026-01-23T19:32:07.323814098Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:32:07.336889 containerd[1548]: time="2026-01-23T19:32:07.336699407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 19:32:07.341255 containerd[1548]: time="2026-01-23T19:32:07.340810856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 19:32:07.341418 kubelet[2837]: E0123 19:32:07.338203 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:32:07.341418 kubelet[2837]: E0123 19:32:07.338344 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:32:07.341418 kubelet[2837]: E0123 19:32:07.339177 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v55kf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77958bf869-kjvtc_calico-system(93f1f488-26e4-4256-ae3b-355d056de5e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 19:32:07.341794 kubelet[2837]: E0123 19:32:07.341738 2837 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:32:07.346443 containerd[1548]: time="2026-01-23T19:32:07.342014219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 19:32:07.442692 containerd[1548]: time="2026-01-23T19:32:07.442387605Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:32:07.449226 containerd[1548]: time="2026-01-23T19:32:07.447456475Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 19:32:07.449226 containerd[1548]: time="2026-01-23T19:32:07.447838517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 19:32:07.449490 kubelet[2837]: E0123 19:32:07.448803 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:32:07.451126 kubelet[2837]: E0123 19:32:07.450780 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:32:07.453196 kubelet[2837]: E0123 19:32:07.451593 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvkh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pspcd_calico-system(7cbe68df-cea7-49bc-bbd7-253343631e45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 19:32:07.457198 kubelet[2837]: E0123 19:32:07.456151 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:32:09.625112 systemd[1]: Started sshd@14-10.0.0.128:22-10.0.0.1:41440.service - OpenSSH per-connection server daemon (10.0.0.1:41440). Jan 23 19:32:09.779660 sshd[5549]: Accepted publickey for core from 10.0.0.1 port 41440 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:32:09.785205 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:10.271225 systemd-logind[1534]: New session 15 of user core. 
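The alternation between ErrImagePull (a pull that just failed) and ImagePullBackOff (kubelet refusing to retry yet) follows kubelet's exponential image-pull backoff. The 10-second base and 5-minute cap below are assumed kubelet defaults rather than values read from this journal; the sketch just prints the resulting retry schedule, which is why fresh pull attempts grow progressively sparser while "Back-off pulling image" records fill the gaps.

```python
# Kubelet retries failed image pulls with exponential backoff. The base,
# factor, and cap below are assumed kubelet defaults (10s doubling to a
# 5-minute ceiling), not values read from this journal.
BASE_S, FACTOR, CAP_S = 10.0, 2.0, 300.0

delay, elapsed = BASE_S, 0.0
for attempt in range(1, 9):
    elapsed += delay
    print(f"attempt {attempt}: wait {delay:5.0f}s  (t+{elapsed:5.0f}s)")
    delay = min(delay * FACTOR, CAP_S)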
Jan 23 19:32:10.324711 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 19:32:10.924958 kubelet[2837]: E0123 19:32:10.924888 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:32:11.249851 sshd[5552]: Connection closed by 10.0.0.1 port 41440 Jan 23 19:32:11.260436 sshd-session[5549]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:11.283034 systemd-logind[1534]: Session 15 logged out. Waiting for processes to exit. Jan 23 19:32:11.283996 systemd[1]: sshd@14-10.0.0.128:22-10.0.0.1:41440.service: Deactivated successfully. Jan 23 19:32:11.289741 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 19:32:11.301573 systemd-logind[1534]: Removed session 15. Jan 23 19:32:11.906405 containerd[1548]: time="2026-01-23T19:32:11.902912526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:32:12.028390 containerd[1548]: time="2026-01-23T19:32:12.027008700Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:32:12.036688 containerd[1548]: time="2026-01-23T19:32:12.036486887Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:32:12.036688 containerd[1548]: time="2026-01-23T19:32:12.036613203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:32:12.036918 kubelet[2837]: E0123 19:32:12.036805 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:32:12.038210 kubelet[2837]: E0123 19:32:12.036962 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:32:12.038210 kubelet[2837]: E0123 19:32:12.037196 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqpf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545cbc66db-fpfjb_calico-apiserver(87c1e199-aab6-487a-be60-3401d4797307): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:32:12.041037 kubelet[2837]: E0123 19:32:12.040890 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:32:13.901965 containerd[1548]: time="2026-01-23T19:32:13.900663132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 19:32:14.018453 containerd[1548]: time="2026-01-23T19:32:14.018402267Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:32:14.020839 containerd[1548]: time="2026-01-23T19:32:14.020794876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 19:32:14.021033 containerd[1548]: time="2026-01-23T19:32:14.021012252Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 19:32:14.028323 kubelet[2837]: E0123 19:32:14.022241 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:32:14.028323 kubelet[2837]: E0123 19:32:14.022541 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:32:14.028323 kubelet[2837]: E0123 19:32:14.022793 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdtmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r9djg_calico-system(92099955-c310-4dc6-a23c-2c8c618bc3b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 19:32:14.028323 kubelet[2837]: E0123 19:32:14.028190 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:32:16.282649 systemd[1]: Started sshd@15-10.0.0.128:22-10.0.0.1:36258.service - OpenSSH per-connection server daemon (10.0.0.1:36258). Jan 23 19:32:16.410034 sshd[5571]: Accepted publickey for core from 10.0.0.1 port 36258 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:32:16.415501 sshd-session[5571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:16.443203 systemd-logind[1534]: New session 16 of user core. Jan 23 19:32:16.454503 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 19:32:16.846464 sshd[5574]: Connection closed by 10.0.0.1 port 36258 Jan 23 19:32:16.848568 sshd-session[5571]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:16.859718 systemd[1]: sshd@15-10.0.0.128:22-10.0.0.1:36258.service: Deactivated successfully. Jan 23 19:32:16.866881 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 19:32:16.872839 systemd-logind[1534]: Session 16 logged out. Waiting for processes to exit. Jan 23 19:32:16.875891 systemd-logind[1534]: Removed session 16. 
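Every failing pod_workers record carries the pod name and podUID, so the set of stuck pods can be pulled straight out of the journal. A minimal sketch, again assuming the journal has been saved to a hypothetical journal.txt.

```python
import re

# List each stuck pod once, with its UID, from the pod_workers records.
# "journal.txt" is a hypothetical dump of this journal.
POD = re.compile(r'pod="([^"]+)" podUID="([^"]+)"')

with open("journal.txt") as fh:
    pairs = sorted(set(POD.findall(fh.read())))

for pod, uid in pairs:
    print(f"{pod}  {uid}")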
Jan 23 19:32:16.884626 kubelet[2837]: E0123 19:32:16.883592 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:18.884342 kubelet[2837]: E0123 19:32:18.883641 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:19.901470 containerd[1548]: time="2026-01-23T19:32:19.901427987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:32:19.917056 kubelet[2837]: E0123 19:32:19.904438 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:32:20.001686 containerd[1548]: time="2026-01-23T19:32:20.001445750Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:32:20.003964 containerd[1548]: time="2026-01-23T19:32:20.003839462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:32:20.004094 containerd[1548]: time="2026-01-23T19:32:20.003987258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:32:20.014299 kubelet[2837]: E0123 19:32:20.004222 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:32:20.014299 kubelet[2837]: E0123 19:32:20.004377 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:32:20.014299 kubelet[2837]: E0123 19:32:20.004561 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljmkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545cbc66db-s4jf2_calico-apiserver(87004552-13b2-409e-9fda-f933cdb145c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:32:20.017727 kubelet[2837]: E0123 19:32:20.017376 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:32:21.875691 systemd[1]: Started sshd@16-10.0.0.128:22-10.0.0.1:36270.service - OpenSSH per-connection server daemon (10.0.0.1:36270). 
Jan 23 19:32:21.901494 kubelet[2837]: E0123 19:32:21.901064 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:32:22.038220 sshd[5589]: Accepted publickey for core from 10.0.0.1 port 36270 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:32:22.040228 sshd-session[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:22.047918 systemd-logind[1534]: New session 17 of user core. Jan 23 19:32:22.054169 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 19:32:22.264979 sshd[5592]: Connection closed by 10.0.0.1 port 36270 Jan 23 19:32:22.266321 sshd-session[5589]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:22.296526 systemd[1]: sshd@16-10.0.0.128:22-10.0.0.1:36270.service: Deactivated successfully. Jan 23 19:32:22.306653 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 19:32:22.314395 systemd-logind[1534]: Session 17 logged out. Waiting for processes to exit. Jan 23 19:32:22.318030 systemd-logind[1534]: Removed session 17. 
Jan 23 19:32:23.891221 kubelet[2837]: E0123 19:32:23.890651 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:32:25.887489 kubelet[2837]: E0123 19:32:25.887331 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:32:26.890792 kubelet[2837]: E0123 19:32:26.890678 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:32:27.303734 systemd[1]: Started sshd@17-10.0.0.128:22-10.0.0.1:49612.service - OpenSSH per-connection server daemon (10.0.0.1:49612). Jan 23 19:32:27.413100 sshd[5606]: Accepted publickey for core from 10.0.0.1 port 49612 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:32:27.418054 sshd-session[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:27.429970 systemd-logind[1534]: New session 18 of user core. Jan 23 19:32:27.444919 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 19:32:27.707425 sshd[5609]: Connection closed by 10.0.0.1 port 49612 Jan 23 19:32:27.705509 sshd-session[5606]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:27.736125 systemd[1]: sshd@17-10.0.0.128:22-10.0.0.1:49612.service: Deactivated successfully. Jan 23 19:32:27.742685 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 19:32:27.747818 systemd-logind[1534]: Session 18 logged out. Waiting for processes to exit. Jan 23 19:32:27.748847 systemd[1]: Started sshd@18-10.0.0.128:22-10.0.0.1:49620.service - OpenSSH per-connection server daemon (10.0.0.1:49620). Jan 23 19:32:27.758388 systemd-logind[1534]: Removed session 18. Jan 23 19:32:27.841909 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 49620 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:32:27.848859 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:27.862226 systemd-logind[1534]: New session 19 of user core. 
Jan 23 19:32:27.885337 kubelet[2837]: E0123 19:32:27.884974 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:27.887011 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 19:32:28.288567 sshd[5627]: Connection closed by 10.0.0.1 port 49620 Jan 23 19:32:28.290034 sshd-session[5624]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:28.325408 systemd[1]: sshd@18-10.0.0.128:22-10.0.0.1:49620.service: Deactivated successfully. Jan 23 19:32:28.333941 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 19:32:28.340397 systemd-logind[1534]: Session 19 logged out. Waiting for processes to exit. Jan 23 19:32:28.349676 systemd[1]: Started sshd@19-10.0.0.128:22-10.0.0.1:49622.service - OpenSSH per-connection server daemon (10.0.0.1:49622). Jan 23 19:32:28.353790 systemd-logind[1534]: Removed session 19. Jan 23 19:32:28.547606 sshd[5638]: Accepted publickey for core from 10.0.0.1 port 49622 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:32:28.549455 sshd-session[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:28.580098 systemd-logind[1534]: New session 20 of user core. Jan 23 19:32:28.586408 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 19:32:28.851599 sshd[5641]: Connection closed by 10.0.0.1 port 49622 Jan 23 19:32:28.852597 sshd-session[5638]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:28.865705 systemd[1]: sshd@19-10.0.0.128:22-10.0.0.1:49622.service: Deactivated successfully. Jan 23 19:32:28.876009 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 19:32:28.900453 systemd-logind[1534]: Session 20 logged out. Waiting for processes to exit. Jan 23 19:32:28.908361 systemd-logind[1534]: Removed session 20. 
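Note: the recurring dns.go:153 warning means the node's resolv.conf lists more nameservers than the resolver limit of three (the classic glibc MAXNS cap), so the kubelet passes only the first three — 1.1.1.1 1.0.0.1 8.8.8.8 — through to pods. A minimal sketch of that check (the path and wording are illustrative, not the kubelet's actual code):

    MAXNS = 3  # classic resolver limit; the kubelet applies the same cap

    def check_resolv_conf(path: str = "/etc/resolv.conf") -> list[str]:
        """Return the nameservers that will actually be applied."""
        nameservers = []
        with open(path) as f:
            for line in f:
                fields = line.split()
                if fields and fields[0] == "nameserver":
                    nameservers.append(fields[1])
        if len(nameservers) > MAXNS:
            print(
                f"warning: {len(nameservers)} nameservers configured, "
                f"only the first {MAXNS} are applied: "
                + " ".join(nameservers[:MAXNS])
            )
        return nameservers[:MAXNS]

Trimming the node's resolv.conf (or whichever file the kubelet is pointed at via --resolv-conf) to three entries silences the warning.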
Jan 23 19:32:30.884175 kubelet[2837]: E0123 19:32:30.883935 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:31.890337 kubelet[2837]: E0123 19:32:31.887470 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:32:33.904557 kubelet[2837]: E0123 19:32:33.900133 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:32:33.942901 systemd[1]: Started sshd@20-10.0.0.128:22-10.0.0.1:49632.service - OpenSSH per-connection server daemon (10.0.0.1:49632). Jan 23 19:32:34.252359 sshd[5681]: Accepted publickey for core from 10.0.0.1 port 49632 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:32:34.276882 sshd-session[5681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:34.327849 systemd-logind[1534]: New session 21 of user core. Jan 23 19:32:34.369749 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 19:32:34.898336 sshd[5684]: Connection closed by 10.0.0.1 port 49632 Jan 23 19:32:34.892968 sshd-session[5681]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:34.931108 systemd[1]: sshd@20-10.0.0.128:22-10.0.0.1:49632.service: Deactivated successfully. Jan 23 19:32:34.939825 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 19:32:34.945394 systemd-logind[1534]: Session 21 logged out. Waiting for processes to exit. Jan 23 19:32:34.948559 systemd-logind[1534]: Removed session 21. 
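Note: the cadence of these "Back-off pulling image" entries reflects the kubelet's image-pull backoff — each failed pull roughly doubles the delay before the next attempt, capped at five minutes, which is why the same handful of images resurfaces every few minutes for the rest of the log. A sketch of that schedule (the 10 s initial delay, factor of 2, and 300 s cap are Kubernetes' documented defaults, assumed here rather than read from this node's configuration):

    def backoff_schedule(
        initial: float = 10.0, factor: float = 2.0,
        cap: float = 300.0, attempts: int = 8,
    ) -> list[float]:
        """Delays, in seconds, between successive pull retries."""
        delay, delays = initial, []
        for _ in range(attempts):
            delays.append(delay)
            delay = min(delay * factor, cap)
        return delays

    print(backoff_schedule())
    # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]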
Jan 23 19:32:35.918630 kubelet[2837]: E0123 19:32:35.915089 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:32:36.896395 kubelet[2837]: E0123 19:32:36.891939 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:32:39.913781 systemd[1]: Started sshd@21-10.0.0.128:22-10.0.0.1:49364.service - OpenSSH per-connection server daemon (10.0.0.1:49364). Jan 23 19:32:40.007011 sshd[5700]: Accepted publickey for core from 10.0.0.1 port 49364 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:32:40.009150 sshd-session[5700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:40.020125 systemd-logind[1534]: New session 22 of user core. Jan 23 19:32:40.030507 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 19:32:40.255791 sshd[5703]: Connection closed by 10.0.0.1 port 49364 Jan 23 19:32:40.254621 sshd-session[5700]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:40.262558 systemd[1]: sshd@21-10.0.0.128:22-10.0.0.1:49364.service: Deactivated successfully. Jan 23 19:32:40.267750 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 19:32:40.274657 systemd-logind[1534]: Session 22 logged out. Waiting for processes to exit. Jan 23 19:32:40.281791 systemd-logind[1534]: Removed session 22. 
Jan 23 19:32:40.889758 kubelet[2837]: E0123 19:32:40.888373 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:32:41.891003 kubelet[2837]: E0123 19:32:41.890893 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:32:42.883437 kubelet[2837]: E0123 19:32:42.883076 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:45.359512 systemd[1]: Started sshd@22-10.0.0.128:22-10.0.0.1:58580.service - OpenSSH per-connection server daemon (10.0.0.1:58580). Jan 23 19:32:45.598575 sshd[5718]: Accepted publickey for core from 10.0.0.1 port 58580 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:32:45.614585 sshd-session[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:45.656373 systemd-logind[1534]: New session 23 of user core. Jan 23 19:32:45.701917 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 23 19:32:45.925512 kubelet[2837]: E0123 19:32:45.924931 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:32:45.926177 kubelet[2837]: E0123 19:32:45.925670 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:32:46.468184 sshd[5721]: Connection closed by 10.0.0.1 port 58580 Jan 23 19:32:46.471032 sshd-session[5718]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:46.488695 systemd-logind[1534]: Session 23 logged out. Waiting for processes to exit. Jan 23 19:32:46.490104 systemd[1]: sshd@22-10.0.0.128:22-10.0.0.1:58580.service: Deactivated successfully. Jan 23 19:32:46.507422 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 19:32:46.524173 systemd-logind[1534]: Removed session 23. 
Jan 23 19:32:46.909223 kubelet[2837]: E0123 19:32:46.909060 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:32:47.945089 kubelet[2837]: E0123 19:32:47.944180 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:32:51.547781 systemd[1]: Started sshd@23-10.0.0.128:22-10.0.0.1:58588.service - OpenSSH per-connection server daemon (10.0.0.1:58588). Jan 23 19:32:51.913024 sshd[5735]: Accepted publickey for core from 10.0.0.1 port 58588 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:32:51.922098 sshd-session[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:51.986653 systemd-logind[1534]: New session 24 of user core. Jan 23 19:32:52.010539 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 19:32:52.640729 sshd[5738]: Connection closed by 10.0.0.1 port 58588 Jan 23 19:32:52.649089 sshd-session[5735]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:52.677180 systemd[1]: sshd@23-10.0.0.128:22-10.0.0.1:58588.service: Deactivated successfully. Jan 23 19:32:52.690409 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 19:32:52.712066 systemd-logind[1534]: Session 24 logged out. Waiting for processes to exit. Jan 23 19:32:52.730148 systemd-logind[1534]: Removed session 24. 
Jan 23 19:32:54.904746 kubelet[2837]: E0123 19:32:54.903104 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:32:55.890160 kubelet[2837]: E0123 19:32:55.890067 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:32:57.670159 systemd[1]: Started sshd@24-10.0.0.128:22-10.0.0.1:48746.service - OpenSSH per-connection server daemon (10.0.0.1:48746). Jan 23 19:32:57.794588 sshd[5752]: Accepted publickey for core from 10.0.0.1 port 48746 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:32:57.801606 sshd-session[5752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:57.818524 systemd-logind[1534]: New session 25 of user core. Jan 23 19:32:57.843879 systemd[1]: Started session-25.scope - Session 25 of User core. 
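Note: each SSH connection in this log gets its own transient systemd unit, named sshd@<counter>-<local ip>:<port>-<remote ip>:<port>.service, plus a matching session-<N>.scope from systemd-logind; every "Started"/"Deactivated successfully" pair traces one connection. Recovering the endpoints from those unit names is a one-regex job (the pattern below is read straight off the log itself):

    import re

    UNIT_RE = re.compile(
        r"sshd@(?P<n>\d+)-(?P<lhost>[\d.]+):(?P<lport>\d+)"
        r"-(?P<rhost>[\d.]+):(?P<rport>\d+)\.service"
    )

    def parse_unit(name: str):
        """Split a per-connection sshd unit name into its endpoints."""
        m = UNIT_RE.search(name)
        if m is None:
            return None
        d = m.groupdict()
        return (
            int(d["n"]),
            (d["lhost"], int(d["lport"])),  # listening side
            (d["rhost"], int(d["rport"])),  # client side
        )

    print(parse_unit("sshd@24-10.0.0.128:22-10.0.0.1:48746.service"))
    # (24, ('10.0.0.128', 22), ('10.0.0.1', 48746))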
Jan 23 19:32:57.887334 kubelet[2837]: E0123 19:32:57.887115 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:32:57.895741 kubelet[2837]: E0123 19:32:57.895415 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:57.898809 kubelet[2837]: E0123 19:32:57.897720 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:32:58.141895 sshd[5755]: Connection closed by 10.0.0.1 port 48746 Jan 23 19:32:58.142484 sshd-session[5752]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:58.150894 systemd[1]: sshd@24-10.0.0.128:22-10.0.0.1:48746.service: Deactivated successfully. Jan 23 19:32:58.160175 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 19:32:58.161859 systemd-logind[1534]: Session 25 logged out. Waiting for processes to exit. Jan 23 19:32:58.164783 systemd-logind[1534]: Removed session 25. 
Jan 23 19:32:58.889907 kubelet[2837]: E0123 19:32:58.889802 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:33:01.895636 kubelet[2837]: E0123 19:33:01.893591 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:33:03.164150 systemd[1]: Started sshd@25-10.0.0.128:22-10.0.0.1:48756.service - OpenSSH per-connection server daemon (10.0.0.1:48756). Jan 23 19:33:03.275335 sshd[5794]: Accepted publickey for core from 10.0.0.1 port 48756 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:33:03.281748 sshd-session[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:33:03.295635 systemd-logind[1534]: New session 26 of user core. Jan 23 19:33:03.301463 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 19:33:03.578624 sshd[5797]: Connection closed by 10.0.0.1 port 48756 Jan 23 19:33:03.580817 sshd-session[5794]: pam_unix(sshd:session): session closed for user core Jan 23 19:33:03.588008 systemd[1]: sshd@25-10.0.0.128:22-10.0.0.1:48756.service: Deactivated successfully. Jan 23 19:33:03.594173 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 19:33:03.596527 systemd-logind[1534]: Session 26 logged out. Waiting for processes to exit. Jan 23 19:33:03.600174 systemd-logind[1534]: Removed session 26. 
Jan 23 19:33:03.884093 kubelet[2837]: E0123 19:33:03.883919 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:33:06.891646 kubelet[2837]: E0123 19:33:06.886955 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:33:08.607938 systemd[1]: Started sshd@26-10.0.0.128:22-10.0.0.1:56358.service - OpenSSH per-connection server daemon (10.0.0.1:56358). Jan 23 19:33:08.816499 sshd[5817]: Accepted publickey for core from 10.0.0.1 port 56358 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:33:08.819379 sshd-session[5817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:33:08.843377 systemd-logind[1534]: New session 27 of user core. Jan 23 19:33:08.858883 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 23 19:33:09.169193 sshd[5820]: Connection closed by 10.0.0.1 port 56358 Jan 23 19:33:09.170656 sshd-session[5817]: pam_unix(sshd:session): session closed for user core Jan 23 19:33:09.195189 systemd[1]: sshd@26-10.0.0.128:22-10.0.0.1:56358.service: Deactivated successfully. Jan 23 19:33:09.205844 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 19:33:09.209729 systemd-logind[1534]: Session 27 logged out. Waiting for processes to exit. Jan 23 19:33:09.212425 systemd-logind[1534]: Removed session 27. 
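Note: everything the kubelet prints after "kubelet[2837]:" follows the klog header layout — a severity letter fused with MMDD, wall-clock time with microseconds, a thread id, then source file:line and the message. That makes the entries machine-parsable, for example to count which call sites dominate this log (pod_workers.go:1301 and dns.go:153 here):

    import re

    KLOG_RE = re.compile(
        r"^(?P<sev>[IWEF])(?P<mmdd>\d{4}) "
        r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +"
        r"(?P<tid>\d+) (?P<src>[\w.]+:\d+)\] (?P<msg>.*)$"
    )

    sample = 'E0123 19:33:03.883919 2837 dns.go:153] "Nameserver limits exceeded"'
    m = KLOG_RE.match(sample)
    print(m.group("sev"), m.group("src"), m.group("msg"))
    # E dns.go:153 "Nameserver limits exceeded"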
Jan 23 19:33:10.886501 kubelet[2837]: E0123 19:33:10.884691 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:33:10.886501 kubelet[2837]: E0123 19:33:10.884957 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:33:11.897904 kubelet[2837]: E0123 19:33:11.897810 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:33:11.905039 kubelet[2837]: E0123 19:33:11.903188 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:33:14.229366 systemd[1]: Started sshd@27-10.0.0.128:22-10.0.0.1:56372.service - OpenSSH per-connection server daemon (10.0.0.1:56372). Jan 23 19:33:14.387349 sshd[5841]: Accepted publickey for core from 10.0.0.1 port 56372 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:33:14.389732 sshd-session[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:33:14.435362 systemd-logind[1534]: New session 28 of user core. 
Jan 23 19:33:14.450420 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 23 19:33:14.876676 sshd[5844]: Connection closed by 10.0.0.1 port 56372 Jan 23 19:33:14.879189 sshd-session[5841]: pam_unix(sshd:session): session closed for user core Jan 23 19:33:14.890884 systemd[1]: sshd@27-10.0.0.128:22-10.0.0.1:56372.service: Deactivated successfully. Jan 23 19:33:14.899099 systemd[1]: session-28.scope: Deactivated successfully. Jan 23 19:33:14.903154 systemd-logind[1534]: Session 28 logged out. Waiting for processes to exit. Jan 23 19:33:14.925836 systemd-logind[1534]: Removed session 28. Jan 23 19:33:15.938679 kubelet[2837]: E0123 19:33:15.938582 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:33:17.894146 kubelet[2837]: E0123 19:33:17.893970 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:33:19.900803 systemd[1]: Started sshd@28-10.0.0.128:22-10.0.0.1:52724.service - OpenSSH per-connection server daemon (10.0.0.1:52724). Jan 23 19:33:20.003455 sshd[5857]: Accepted publickey for core from 10.0.0.1 port 52724 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:33:20.004682 sshd-session[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:33:20.030898 systemd-logind[1534]: New session 29 of user core. Jan 23 19:33:20.038854 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 23 19:33:20.311337 sshd[5860]: Connection closed by 10.0.0.1 port 52724 Jan 23 19:33:20.312199 sshd-session[5857]: pam_unix(sshd:session): session closed for user core Jan 23 19:33:20.322553 systemd-logind[1534]: Session 29 logged out. Waiting for processes to exit. Jan 23 19:33:20.324105 systemd[1]: sshd@28-10.0.0.128:22-10.0.0.1:52724.service: Deactivated successfully. Jan 23 19:33:20.332955 systemd[1]: session-29.scope: Deactivated successfully. Jan 23 19:33:20.337791 systemd-logind[1534]: Removed session 29. 
Jan 23 19:33:21.888030 kubelet[2837]: E0123 19:33:21.887821 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:33:21.892451 containerd[1548]: time="2026-01-23T19:33:21.892410208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 19:33:21.899730 kubelet[2837]: E0123 19:33:21.892571 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:33:21.991962 containerd[1548]: time="2026-01-23T19:33:21.991842407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:33:21.995491 containerd[1548]: time="2026-01-23T19:33:21.995445664Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 19:33:21.996584 containerd[1548]: time="2026-01-23T19:33:21.995653857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 19:33:21.997240 kubelet[2837]: E0123 19:33:21.996774 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:33:21.997240 kubelet[2837]: E0123 19:33:21.996827 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:33:21.997240 kubelet[2837]: E0123 19:33:21.997080 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rdb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-67877fc7f5-bsvtq_calico-system(ae1ba4f6-1230-4757-8b1a-af9cfe7ac401): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 19:33:21.998885 kubelet[2837]: E0123 19:33:21.998490 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:33:22.891597 kubelet[2837]: E0123 19:33:22.891209 2837 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:33:23.883677 kubelet[2837]: E0123 19:33:23.883635 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:33:24.884841 kubelet[2837]: E0123 19:33:24.884783 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:33:25.344131 systemd[1]: Started sshd@29-10.0.0.128:22-10.0.0.1:41006.service - OpenSSH per-connection server daemon (10.0.0.1:41006). Jan 23 19:33:25.475206 sshd[5875]: Accepted publickey for core from 10.0.0.1 port 41006 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:33:25.496605 sshd-session[5875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:33:25.518966 systemd-logind[1534]: New session 30 of user core. Jan 23 19:33:25.529889 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 23 19:33:25.950789 sshd[5878]: Connection closed by 10.0.0.1 port 41006 Jan 23 19:33:25.949241 sshd-session[5875]: pam_unix(sshd:session): session closed for user core Jan 23 19:33:25.958736 systemd[1]: sshd@29-10.0.0.128:22-10.0.0.1:41006.service: Deactivated successfully. Jan 23 19:33:25.963108 systemd[1]: session-30.scope: Deactivated successfully. Jan 23 19:33:25.969011 systemd-logind[1534]: Session 30 logged out. Waiting for processes to exit. Jan 23 19:33:25.971965 systemd-logind[1534]: Removed session 30. 
Jan 23 19:33:28.888100 kubelet[2837]: E0123 19:33:28.887720 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:33:30.889108 containerd[1548]: time="2026-01-23T19:33:30.886851262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 19:33:30.957670 containerd[1548]: time="2026-01-23T19:33:30.957464991Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:33:30.960856 containerd[1548]: time="2026-01-23T19:33:30.960006037Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 19:33:30.960856 containerd[1548]: time="2026-01-23T19:33:30.960152841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 19:33:30.961041 kubelet[2837]: E0123 19:33:30.960709 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:33:30.961041 kubelet[2837]: E0123 19:33:30.960773 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:33:30.961041 kubelet[2837]: E0123 19:33:30.960914 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7a483d5077ea476891ae22814fc1300c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v55kf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77958bf869-kjvtc_calico-system(93f1f488-26e4-4256-ae3b-355d056de5e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 19:33:30.964481 containerd[1548]: time="2026-01-23T19:33:30.964244578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 19:33:30.979869 systemd[1]: Started sshd@30-10.0.0.128:22-10.0.0.1:41014.service - OpenSSH per-connection server daemon (10.0.0.1:41014). 
Jan 23 19:33:31.044337 containerd[1548]: time="2026-01-23T19:33:31.042953749Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:33:31.044992 containerd[1548]: time="2026-01-23T19:33:31.044843952Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 19:33:31.044992 containerd[1548]: time="2026-01-23T19:33:31.044973012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 19:33:31.045655 kubelet[2837]: E0123 19:33:31.045604 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:33:31.045924 kubelet[2837]: E0123 19:33:31.045812 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:33:31.046514 kubelet[2837]: E0123 19:33:31.046156 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v55kf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPol
icy:nil,} start failed in pod whisker-77958bf869-kjvtc_calico-system(93f1f488-26e4-4256-ae3b-355d056de5e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 19:33:31.047485 kubelet[2837]: E0123 19:33:31.047384 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:33:31.135977 sshd[5917]: Accepted publickey for core from 10.0.0.1 port 41014 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:33:31.138427 sshd-session[5917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:33:31.152510 systemd-logind[1534]: New session 31 of user core. Jan 23 19:33:31.170879 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 23 19:33:31.425074 sshd[5920]: Connection closed by 10.0.0.1 port 41014 Jan 23 19:33:31.425713 sshd-session[5917]: pam_unix(sshd:session): session closed for user core Jan 23 19:33:31.435673 systemd[1]: sshd@30-10.0.0.128:22-10.0.0.1:41014.service: Deactivated successfully. Jan 23 19:33:31.440136 systemd[1]: session-31.scope: Deactivated successfully. Jan 23 19:33:31.444121 systemd-logind[1534]: Session 31 logged out. Waiting for processes to exit. Jan 23 19:33:31.447397 systemd-logind[1534]: Removed session 31. 
Jan 23 19:33:33.922650 containerd[1548]: time="2026-01-23T19:33:33.922512478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 19:33:34.023125 containerd[1548]: time="2026-01-23T19:33:34.022713701Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:33:34.034201 containerd[1548]: time="2026-01-23T19:33:34.032133966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 19:33:34.034201 containerd[1548]: time="2026-01-23T19:33:34.032255682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 19:33:34.034487 kubelet[2837]: E0123 19:33:34.032549 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:33:34.034487 kubelet[2837]: E0123 19:33:34.032662 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:33:34.034487 kubelet[2837]: E0123 19:33:34.032855 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvkh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pspcd_calico-system(7cbe68df-cea7-49bc-bbd7-253343631e45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 19:33:34.039332 containerd[1548]: time="2026-01-23T19:33:34.039207145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 19:33:34.208012 containerd[1548]: time="2026-01-23T19:33:34.207447393Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:33:34.227863 containerd[1548]: time="2026-01-23T19:33:34.226553466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 19:33:34.227863 containerd[1548]: time="2026-01-23T19:33:34.226741847Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 19:33:34.230832 kubelet[2837]: E0123 19:33:34.230772 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:33:34.239145 kubelet[2837]: E0123 19:33:34.239088 2837 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:33:34.239674 kubelet[2837]: E0123 19:33:34.239558 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvkh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pspcd_calico-system(7cbe68df-cea7-49bc-bbd7-253343631e45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 19:33:34.254711 kubelet[2837]: E0123 19:33:34.242430 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:33:35.915258 kubelet[2837]: E0123 19:33:35.913697 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:33:35.926502 containerd[1548]: time="2026-01-23T19:33:35.921851427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 19:33:36.101540 containerd[1548]: time="2026-01-23T19:33:36.101229606Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:33:36.162137 containerd[1548]: time="2026-01-23T19:33:36.162058040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 19:33:36.170782 containerd[1548]: time="2026-01-23T19:33:36.168771799Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 19:33:36.170936 kubelet[2837]: E0123 19:33:36.170447 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:33:36.170936 kubelet[2837]: E0123 19:33:36.170511 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:33:36.170936 kubelet[2837]: E0123 19:33:36.170752 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdtmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r9djg_calico-system(92099955-c310-4dc6-a23c-2c8c618bc3b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 19:33:36.185341 kubelet[2837]: E0123 19:33:36.175806 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:33:36.484585 systemd[1]: Started 
sshd@31-10.0.0.128:22-10.0.0.1:59590.service - OpenSSH per-connection server daemon (10.0.0.1:59590). Jan 23 19:33:36.861441 sshd[5934]: Accepted publickey for core from 10.0.0.1 port 59590 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:33:36.876362 sshd-session[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:33:36.929806 systemd-logind[1534]: New session 32 of user core. Jan 23 19:33:36.973435 containerd[1548]: time="2026-01-23T19:33:36.908213825Z" level=warning msg="container event discarded" container=05c1c3c5ecd633a0de05562c350ec6cdac6edf0288cc9e027b03929433ca8b78 type=CONTAINER_CREATED_EVENT Jan 23 19:33:36.981037 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 23 19:33:37.060523 containerd[1548]: time="2026-01-23T19:33:37.060327686Z" level=warning msg="container event discarded" container=05c1c3c5ecd633a0de05562c350ec6cdac6edf0288cc9e027b03929433ca8b78 type=CONTAINER_STARTED_EVENT Jan 23 19:33:37.060523 containerd[1548]: time="2026-01-23T19:33:37.060395854Z" level=warning msg="container event discarded" container=e3dc99c3375444c96d6e70336e71368daa46984bf4b62963a7934080310bef57 type=CONTAINER_CREATED_EVENT Jan 23 19:33:37.060523 containerd[1548]: time="2026-01-23T19:33:37.060410250Z" level=warning msg="container event discarded" container=e3dc99c3375444c96d6e70336e71368daa46984bf4b62963a7934080310bef57 type=CONTAINER_STARTED_EVENT Jan 23 19:33:37.060523 containerd[1548]: time="2026-01-23T19:33:37.060419337Z" level=warning msg="container event discarded" container=6272c746af367a0d286031f4167070f5d920eb382bf226d9ce8c7bedb2428dd3 type=CONTAINER_CREATED_EVENT Jan 23 19:33:37.060523 containerd[1548]: time="2026-01-23T19:33:37.060430839Z" level=warning msg="container event discarded" container=6272c746af367a0d286031f4167070f5d920eb382bf226d9ce8c7bedb2428dd3 type=CONTAINER_STARTED_EVENT Jan 23 19:33:37.083848 containerd[1548]: time="2026-01-23T19:33:37.083646983Z" level=warning msg="container event discarded" container=1a01b9279ab14e243b11c69b33a0747e98d0f48a632e27d90840c2dad1b2de06 type=CONTAINER_CREATED_EVENT Jan 23 19:33:37.124083 containerd[1548]: time="2026-01-23T19:33:37.123569525Z" level=warning msg="container event discarded" container=637117f5780811e2727eef24bc4e1132e6cee8c776c38d7c1318d142a75008be type=CONTAINER_CREATED_EVENT Jan 23 19:33:37.124426 containerd[1548]: time="2026-01-23T19:33:37.124380526Z" level=warning msg="container event discarded" container=fbc71c2614684cdaf8d0e4b8b4175a59f2475769ae8881aeab8e178486ea1b65 type=CONTAINER_CREATED_EVENT Jan 23 19:33:37.534874 containerd[1548]: time="2026-01-23T19:33:37.534792476Z" level=warning msg="container event discarded" container=1a01b9279ab14e243b11c69b33a0747e98d0f48a632e27d90840c2dad1b2de06 type=CONTAINER_STARTED_EVENT Jan 23 19:33:37.547776 containerd[1548]: time="2026-01-23T19:33:37.547689847Z" level=warning msg="container event discarded" container=fbc71c2614684cdaf8d0e4b8b4175a59f2475769ae8881aeab8e178486ea1b65 type=CONTAINER_STARTED_EVENT Jan 23 19:33:37.913366 kubelet[2837]: E0123 19:33:37.909425 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:33:37.963940 containerd[1548]: time="2026-01-23T19:33:37.963890155Z" level=warning msg="container event discarded" container=637117f5780811e2727eef24bc4e1132e6cee8c776c38d7c1318d142a75008be type=CONTAINER_STARTED_EVENT Jan 23 19:33:37.981348 kubelet[2837]: E0123 19:33:37.981026 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:33:38.161382 sshd[5937]: Connection closed by 10.0.0.1 port 59590 Jan 23 19:33:38.177665 sshd-session[5934]: pam_unix(sshd:session): session closed for user core Jan 23 19:33:38.208939 systemd[1]: sshd@31-10.0.0.128:22-10.0.0.1:59590.service: Deactivated successfully. Jan 23 19:33:38.235537 systemd[1]: session-32.scope: Deactivated successfully. Jan 23 19:33:38.255238 systemd-logind[1534]: Session 32 logged out. Waiting for processes to exit. Jan 23 19:33:38.271060 systemd[1]: Started sshd@32-10.0.0.128:22-10.0.0.1:59600.service - OpenSSH per-connection server daemon (10.0.0.1:59600). Jan 23 19:33:38.283840 systemd-logind[1534]: Removed session 32. Jan 23 19:33:38.467782 sshd[5965]: Accepted publickey for core from 10.0.0.1 port 59600 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:33:38.477010 sshd-session[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:33:38.516694 systemd-logind[1534]: New session 33 of user core. Jan 23 19:33:38.543896 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 23 19:33:39.899968 sshd[5968]: Connection closed by 10.0.0.1 port 59600 Jan 23 19:33:39.903861 sshd-session[5965]: pam_unix(sshd:session): session closed for user core Jan 23 19:33:39.921887 systemd[1]: sshd@32-10.0.0.128:22-10.0.0.1:59600.service: Deactivated successfully. Jan 23 19:33:39.930998 systemd[1]: session-33.scope: Deactivated successfully. Jan 23 19:33:39.934839 systemd-logind[1534]: Session 33 logged out. Waiting for processes to exit. Jan 23 19:33:39.951478 systemd[1]: Started sshd@33-10.0.0.128:22-10.0.0.1:59602.service - OpenSSH per-connection server daemon (10.0.0.1:59602). Jan 23 19:33:39.960525 systemd-logind[1534]: Removed session 33. Jan 23 19:33:40.132146 sshd[5980]: Accepted publickey for core from 10.0.0.1 port 59602 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:33:40.145221 sshd-session[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:33:40.200540 systemd-logind[1534]: New session 34 of user core. Jan 23 19:33:40.219111 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jan 23 19:33:40.898478 containerd[1548]: time="2026-01-23T19:33:40.898433246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:33:41.029224 containerd[1548]: time="2026-01-23T19:33:41.029171455Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:33:41.045909 containerd[1548]: time="2026-01-23T19:33:41.045448935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:33:41.047227 kubelet[2837]: E0123 19:33:41.047146 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:33:41.047227 kubelet[2837]: E0123 19:33:41.047201 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:33:41.048245 containerd[1548]: time="2026-01-23T19:33:41.045812171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:33:41.053793 kubelet[2837]: E0123 19:33:41.053585 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqpf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545cbc66db-fpfjb_calico-apiserver(87c1e199-aab6-487a-be60-3401d4797307): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:33:41.058792 kubelet[2837]: E0123 19:33:41.057035 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:33:42.223859 sshd[5983]: Connection closed by 10.0.0.1 port 59602 Jan 23 19:33:42.224812 sshd-session[5980]: pam_unix(sshd:session): session closed for user core Jan 23 19:33:42.252138 systemd[1]: sshd@33-10.0.0.128:22-10.0.0.1:59602.service: Deactivated successfully. Jan 23 19:33:42.255619 systemd[1]: session-34.scope: Deactivated successfully. Jan 23 19:33:42.261525 systemd-logind[1534]: Session 34 logged out. Waiting for processes to exit. Jan 23 19:33:42.266747 systemd[1]: Started sshd@34-10.0.0.128:22-10.0.0.1:59604.service - OpenSSH per-connection server daemon (10.0.0.1:59604). Jan 23 19:33:42.273704 systemd-logind[1534]: Removed session 34. Jan 23 19:33:42.397561 sshd[6014]: Accepted publickey for core from 10.0.0.1 port 59604 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:33:42.400694 sshd-session[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:33:42.409476 systemd-logind[1534]: New session 35 of user core. Jan 23 19:33:42.429212 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 23 19:33:43.051884 sshd[6017]: Connection closed by 10.0.0.1 port 59604 Jan 23 19:33:43.051704 sshd-session[6014]: pam_unix(sshd:session): session closed for user core Jan 23 19:33:43.068701 systemd[1]: Started sshd@35-10.0.0.128:22-10.0.0.1:59614.service - OpenSSH per-connection server daemon (10.0.0.1:59614). Jan 23 19:33:43.069689 systemd[1]: sshd@34-10.0.0.128:22-10.0.0.1:59604.service: Deactivated successfully. Jan 23 19:33:43.076987 systemd[1]: session-35.scope: Deactivated successfully. Jan 23 19:33:43.083557 systemd-logind[1534]: Session 35 logged out. Waiting for processes to exit. Jan 23 19:33:43.087550 systemd-logind[1534]: Removed session 35. 
Jan 23 19:33:43.164348 sshd[6026]: Accepted publickey for core from 10.0.0.1 port 59614 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:33:43.168899 sshd-session[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:33:43.187837 systemd-logind[1534]: New session 36 of user core. Jan 23 19:33:43.195917 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 23 19:33:43.486721 sshd[6033]: Connection closed by 10.0.0.1 port 59614 Jan 23 19:33:43.487570 sshd-session[6026]: pam_unix(sshd:session): session closed for user core Jan 23 19:33:43.501502 systemd[1]: sshd@35-10.0.0.128:22-10.0.0.1:59614.service: Deactivated successfully. Jan 23 19:33:43.506200 systemd[1]: session-36.scope: Deactivated successfully. Jan 23 19:33:43.512371 systemd-logind[1534]: Session 36 logged out. Waiting for processes to exit. Jan 23 19:33:43.516446 systemd-logind[1534]: Removed session 36. Jan 23 19:33:45.904197 kubelet[2837]: E0123 19:33:45.899783 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:33:46.893873 kubelet[2837]: E0123 19:33:46.893797 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:33:48.539457 systemd[1]: Started sshd@36-10.0.0.128:22-10.0.0.1:51578.service - OpenSSH per-connection server daemon (10.0.0.1:51578). 
Jan 23 19:33:48.747995 sshd[6054]: Accepted publickey for core from 10.0.0.1 port 51578 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:33:48.757247 sshd-session[6054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:33:48.790815 systemd-logind[1534]: New session 37 of user core. Jan 23 19:33:48.816846 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 23 19:33:48.887423 kubelet[2837]: E0123 19:33:48.884496 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:33:48.891753 kubelet[2837]: E0123 19:33:48.889169 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:33:48.903242 containerd[1548]: time="2026-01-23T19:33:48.902869765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:33:49.030497 containerd[1548]: time="2026-01-23T19:33:49.030441128Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:33:49.044162 containerd[1548]: time="2026-01-23T19:33:49.043930293Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:33:49.044162 containerd[1548]: time="2026-01-23T19:33:49.044117733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:33:49.076974 kubelet[2837]: E0123 19:33:49.055504 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:33:49.076974 kubelet[2837]: E0123 19:33:49.055583 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:33:49.076974 kubelet[2837]: E0123 19:33:49.055809 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljmkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545cbc66db-s4jf2_calico-apiserver(87004552-13b2-409e-9fda-f933cdb145c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:33:49.076974 kubelet[2837]: E0123 19:33:49.058400 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:33:49.535896 sshd[6057]: Connection closed by 10.0.0.1 port 51578 Jan 23 19:33:49.530095 sshd-session[6054]: pam_unix(sshd:session): session closed for user core Jan 23 19:33:49.567168 systemd[1]: sshd@36-10.0.0.128:22-10.0.0.1:51578.service: Deactivated successfully. Jan 23 19:33:49.572377 systemd-logind[1534]: Session 37 logged out. Waiting for processes to exit. Jan 23 19:33:49.584097 systemd[1]: session-37.scope: Deactivated successfully. Jan 23 19:33:49.603737 systemd-logind[1534]: Removed session 37. 
Jan 23 19:33:50.890186 kubelet[2837]: E0123 19:33:50.886497 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:33:50.896788 kubelet[2837]: E0123 19:33:50.894930 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:33:54.571193 systemd[1]: Started sshd@37-10.0.0.128:22-10.0.0.1:39882.service - OpenSSH per-connection server daemon (10.0.0.1:39882). Jan 23 19:33:54.788889 sshd[6071]: Accepted publickey for core from 10.0.0.1 port 39882 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:33:54.802811 sshd-session[6071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:33:54.822099 systemd-logind[1534]: New session 38 of user core. Jan 23 19:33:54.840390 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 23 19:33:54.913533 kubelet[2837]: E0123 19:33:54.912548 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:33:55.455942 sshd[6074]: Connection closed by 10.0.0.1 port 39882 Jan 23 19:33:55.461372 sshd-session[6071]: pam_unix(sshd:session): session closed for user core Jan 23 19:33:55.478129 systemd-logind[1534]: Session 38 logged out. Waiting for processes to exit. Jan 23 19:33:55.491183 systemd[1]: sshd@37-10.0.0.128:22-10.0.0.1:39882.service: Deactivated successfully. Jan 23 19:33:55.505095 systemd[1]: session-38.scope: Deactivated successfully. Jan 23 19:33:55.537648 systemd-logind[1534]: Removed session 38. 
Jan 23 19:33:57.923793 kubelet[2837]: E0123 19:33:57.922785 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:34:00.486547 systemd[1]: Started sshd@38-10.0.0.128:22-10.0.0.1:39890.service - OpenSSH per-connection server daemon (10.0.0.1:39890). Jan 23 19:34:00.567558 sshd[6113]: Accepted publickey for core from 10.0.0.1 port 39890 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:34:00.570898 sshd-session[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:34:00.582666 systemd-logind[1534]: New session 39 of user core. Jan 23 19:34:00.596549 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 23 19:34:00.837642 sshd[6116]: Connection closed by 10.0.0.1 port 39890 Jan 23 19:34:00.837997 sshd-session[6113]: pam_unix(sshd:session): session closed for user core Jan 23 19:34:00.850974 systemd[1]: sshd@38-10.0.0.128:22-10.0.0.1:39890.service: Deactivated successfully. Jan 23 19:34:00.855818 systemd[1]: session-39.scope: Deactivated successfully. Jan 23 19:34:00.858407 systemd-logind[1534]: Session 39 logged out. Waiting for processes to exit. Jan 23 19:34:00.865346 systemd-logind[1534]: Removed session 39. 
Jan 23 19:34:00.884341 kubelet[2837]: E0123 19:34:00.884079 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:34:01.887987 kubelet[2837]: E0123 19:34:01.887695 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:34:02.886124 kubelet[2837]: E0123 19:34:02.886007 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:34:03.889065 kubelet[2837]: E0123 19:34:03.888616 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:34:04.889518 kubelet[2837]: E0123 19:34:04.888919 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:34:05.892596 systemd[1]: Started sshd@39-10.0.0.128:22-10.0.0.1:51096.service - OpenSSH per-connection server daemon (10.0.0.1:51096). 
Jan 23 19:34:06.058387 sshd[6131]: Accepted publickey for core from 10.0.0.1 port 51096 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:34:06.064713 sshd-session[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:34:06.088038 systemd-logind[1534]: New session 40 of user core. Jan 23 19:34:06.113049 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 23 19:34:06.412077 sshd[6134]: Connection closed by 10.0.0.1 port 51096 Jan 23 19:34:06.415510 sshd-session[6131]: pam_unix(sshd:session): session closed for user core Jan 23 19:34:06.432621 systemd[1]: sshd@39-10.0.0.128:22-10.0.0.1:51096.service: Deactivated successfully. Jan 23 19:34:06.441625 systemd[1]: session-40.scope: Deactivated successfully. Jan 23 19:34:06.453209 systemd-logind[1534]: Session 40 logged out. Waiting for processes to exit. Jan 23 19:34:06.455991 systemd-logind[1534]: Removed session 40. Jan 23 19:34:06.901195 kubelet[2837]: E0123 19:34:06.895401 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:34:08.430983 containerd[1548]: time="2026-01-23T19:34:08.430853053Z" level=warning msg="container event discarded" container=212c21e015393f0cb912a85ea026855767b1951843eab73fc4a42fa8e92f95f2 type=CONTAINER_CREATED_EVENT Jan 23 19:34:08.430983 containerd[1548]: time="2026-01-23T19:34:08.430944392Z" level=warning msg="container event discarded" container=212c21e015393f0cb912a85ea026855767b1951843eab73fc4a42fa8e92f95f2 type=CONTAINER_STARTED_EVENT Jan 23 19:34:08.706433 containerd[1548]: time="2026-01-23T19:34:08.705504602Z" level=warning msg="container event discarded" container=68aec34b8ee041f74476dd94758e2a79135bcc4d8e55bb4b78fbeefa2a272f81 type=CONTAINER_CREATED_EVENT Jan 23 19:34:08.920165 kubelet[2837]: E0123 19:34:08.912458 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:34:09.817134 containerd[1548]: time="2026-01-23T19:34:09.815023276Z" level=warning msg="container event discarded" container=68aec34b8ee041f74476dd94758e2a79135bcc4d8e55bb4b78fbeefa2a272f81 
type=CONTAINER_STARTED_EVENT Jan 23 19:34:10.145698 containerd[1548]: time="2026-01-23T19:34:10.145529475Z" level=warning msg="container event discarded" container=ef445ca20768cd4f7fb55b4f891f7a1623b9579edd7b2ea5030210250295a88a type=CONTAINER_CREATED_EVENT Jan 23 19:34:10.145962 containerd[1548]: time="2026-01-23T19:34:10.145925604Z" level=warning msg="container event discarded" container=ef445ca20768cd4f7fb55b4f891f7a1623b9579edd7b2ea5030210250295a88a type=CONTAINER_STARTED_EVENT Jan 23 19:34:11.533421 systemd[1]: Started sshd@40-10.0.0.128:22-10.0.0.1:51100.service - OpenSSH per-connection server daemon (10.0.0.1:51100). Jan 23 19:34:11.755448 sshd[6151]: Accepted publickey for core from 10.0.0.1 port 51100 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:34:11.762595 sshd-session[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:34:11.809103 systemd-logind[1534]: New session 41 of user core. Jan 23 19:34:11.823143 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 23 19:34:12.190045 sshd[6154]: Connection closed by 10.0.0.1 port 51100 Jan 23 19:34:12.192101 sshd-session[6151]: pam_unix(sshd:session): session closed for user core Jan 23 19:34:12.213666 systemd[1]: sshd@40-10.0.0.128:22-10.0.0.1:51100.service: Deactivated successfully. Jan 23 19:34:12.229402 systemd[1]: session-41.scope: Deactivated successfully. Jan 23 19:34:12.256786 systemd-logind[1534]: Session 41 logged out. Waiting for processes to exit. Jan 23 19:34:12.258724 systemd-logind[1534]: Removed session 41. Jan 23 19:34:14.909378 kubelet[2837]: E0123 19:34:14.908928 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:34:15.907935 kubelet[2837]: E0123 19:34:15.907528 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pspcd" podUID="7cbe68df-cea7-49bc-bbd7-253343631e45" Jan 23 19:34:16.891518 kubelet[2837]: E0123 19:34:16.891380 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67877fc7f5-bsvtq" podUID="ae1ba4f6-1230-4757-8b1a-af9cfe7ac401" Jan 23 19:34:17.264612 systemd[1]: Started sshd@41-10.0.0.128:22-10.0.0.1:46108.service - OpenSSH per-connection server daemon (10.0.0.1:46108). Jan 23 19:34:17.444536 containerd[1548]: time="2026-01-23T19:34:17.441563555Z" level=warning msg="container event discarded" container=476a445c9f2593f2ef0d9a9338121307bc1eaa0a5e32cc3e0083d8675db85dcd type=CONTAINER_CREATED_EVENT Jan 23 19:34:17.504463 sshd[6168]: Accepted publickey for core from 10.0.0.1 port 46108 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:34:17.532040 sshd-session[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:34:17.573788 systemd-logind[1534]: New session 42 of user core. Jan 23 19:34:17.604138 systemd[1]: Started session-42.scope - Session 42 of User core. Jan 23 19:34:17.735537 containerd[1548]: time="2026-01-23T19:34:17.734213921Z" level=warning msg="container event discarded" container=476a445c9f2593f2ef0d9a9338121307bc1eaa0a5e32cc3e0083d8675db85dcd type=CONTAINER_STARTED_EVENT Jan 23 19:34:17.892985 kubelet[2837]: E0123 19:34:17.892803 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r9djg" podUID="92099955-c310-4dc6-a23c-2c8c618bc3b8" Jan 23 19:34:17.944876 sshd[6171]: Connection closed by 10.0.0.1 port 46108 Jan 23 19:34:17.945595 sshd-session[6168]: pam_unix(sshd:session): session closed for user core Jan 23 19:34:17.954247 systemd-logind[1534]: Session 42 logged out. Waiting for processes to exit. Jan 23 19:34:17.960099 systemd[1]: sshd@41-10.0.0.128:22-10.0.0.1:46108.service: Deactivated successfully. Jan 23 19:34:17.963481 systemd[1]: session-42.scope: Deactivated successfully. Jan 23 19:34:17.973482 systemd-logind[1534]: Removed session 42. 
Jan 23 19:34:19.885709 kubelet[2837]: E0123 19:34:19.885024 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-fpfjb" podUID="87c1e199-aab6-487a-be60-3401d4797307" Jan 23 19:34:21.883963 kubelet[2837]: E0123 19:34:21.883924 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:34:22.961641 systemd[1]: Started sshd@42-10.0.0.128:22-10.0.0.1:46120.service - OpenSSH per-connection server daemon (10.0.0.1:46120). Jan 23 19:34:23.022424 sshd[6187]: Accepted publickey for core from 10.0.0.1 port 46120 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:34:23.024618 sshd-session[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:34:23.030337 systemd-logind[1534]: New session 43 of user core. Jan 23 19:34:23.037495 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 23 19:34:23.188951 sshd[6190]: Connection closed by 10.0.0.1 port 46120 Jan 23 19:34:23.189802 sshd-session[6187]: pam_unix(sshd:session): session closed for user core Jan 23 19:34:23.196182 systemd[1]: sshd@42-10.0.0.128:22-10.0.0.1:46120.service: Deactivated successfully. Jan 23 19:34:23.199538 systemd[1]: session-43.scope: Deactivated successfully. Jan 23 19:34:23.201430 systemd-logind[1534]: Session 43 logged out. Waiting for processes to exit. Jan 23 19:34:23.203489 systemd-logind[1534]: Removed session 43. 
Jan 23 19:34:23.883825 kubelet[2837]: E0123 19:34:23.883729 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:34:23.886096 kubelet[2837]: E0123 19:34:23.885941 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77958bf869-kjvtc" podUID="93f1f488-26e4-4256-ae3b-355d056de5e6" Jan 23 19:34:26.883734 kubelet[2837]: E0123 19:34:26.883193 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:34:27.885119 kubelet[2837]: E0123 19:34:27.885018 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545cbc66db-s4jf2" podUID="87004552-13b2-409e-9fda-f933cdb145c9" Jan 23 19:34:28.207257 systemd[1]: Started sshd@43-10.0.0.128:22-10.0.0.1:58374.service - OpenSSH per-connection server daemon (10.0.0.1:58374). Jan 23 19:34:28.274323 sshd[6204]: Accepted publickey for core from 10.0.0.1 port 58374 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:34:28.276724 sshd-session[6204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:34:28.285080 systemd-logind[1534]: New session 44 of user core. Jan 23 19:34:28.291037 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 23 19:34:28.434517 sshd[6207]: Connection closed by 10.0.0.1 port 58374 Jan 23 19:34:28.436603 sshd-session[6204]: pam_unix(sshd:session): session closed for user core Jan 23 19:34:28.441611 systemd[1]: sshd@43-10.0.0.128:22-10.0.0.1:58374.service: Deactivated successfully. Jan 23 19:34:28.443765 systemd[1]: session-44.scope: Deactivated successfully. Jan 23 19:34:28.445763 systemd-logind[1534]: Session 44 logged out. Waiting for processes to exit. Jan 23 19:34:28.447605 systemd-logind[1534]: Removed session 44.