Jan 23 01:09:25.986637 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026
Jan 23 01:09:25.986686 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:09:25.986701 kernel: BIOS-provided physical RAM map:
Jan 23 01:09:25.986715 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 23 01:09:25.986731 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 23 01:09:25.986741 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 01:09:25.986752 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 23 01:09:25.986763 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 23 01:09:25.986774 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 23 01:09:25.986802 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 23 01:09:25.986815 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 01:09:25.986826 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 01:09:25.986836 kernel: NX (Execute Disable) protection: active
Jan 23 01:09:25.986853 kernel: APIC: Static calls initialized
Jan 23 01:09:25.986865 kernel: SMBIOS 2.8 present.
Jan 23 01:09:25.986877 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 23 01:09:25.986889 kernel: DMI: Memory slots populated: 1/1
Jan 23 01:09:25.986900 kernel: Hypervisor detected: KVM
Jan 23 01:09:25.986911 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 23 01:09:25.986927 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 01:09:25.986962 kernel: kvm-clock: using sched offset of 6149845098 cycles
Jan 23 01:09:25.986979 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 01:09:25.986991 kernel: tsc: Detected 2499.998 MHz processor
Jan 23 01:09:25.987002 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 01:09:25.989038 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 01:09:25.989059 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 23 01:09:25.989071 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 01:09:25.989083 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 01:09:25.989103 kernel: Using GB pages for direct mapping
Jan 23 01:09:25.989114 kernel: ACPI: Early table checksum verification disabled
Jan 23 01:09:25.989126 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 23 01:09:25.989138 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:09:25.989149 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:09:25.989161 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:09:25.989173 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 23 01:09:25.989214 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:09:25.989249 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:09:25.989267 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:09:25.989279 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:09:25.989291 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 23 01:09:25.989308 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 23 01:09:25.989321 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 23 01:09:25.989333 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 23 01:09:25.989349 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 23 01:09:25.989362 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 23 01:09:25.989374 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 23 01:09:25.989386 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 23 01:09:25.989398 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 23 01:09:25.989410 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 23 01:09:25.989422 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff]
Jan 23 01:09:25.989434 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff]
Jan 23 01:09:25.989451 kernel: Zone ranges:
Jan 23 01:09:25.989463 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 01:09:25.989475 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 23 01:09:25.989487 kernel: Normal empty
Jan 23 01:09:25.989499 kernel: Device empty
Jan 23 01:09:25.989511 kernel: Movable zone start for each node
Jan 23 01:09:25.989523 kernel: Early memory node ranges
Jan 23 01:09:25.989558 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 01:09:25.989572 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 23 01:09:25.989590 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 23 01:09:25.989602 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 01:09:25.989615 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 01:09:25.989627 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 23 01:09:25.989639 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 01:09:25.989651 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 01:09:25.989663 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 01:09:25.989675 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 01:09:25.989687 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 01:09:25.989698 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 01:09:25.989716 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 01:09:25.989728 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 01:09:25.989740 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 01:09:25.989751 kernel: TSC deadline timer available
Jan 23 01:09:25.989763 kernel: CPU topo: Max. logical packages: 16
Jan 23 01:09:25.989775 kernel: CPU topo: Max. logical dies: 16
Jan 23 01:09:25.989787 kernel: CPU topo: Max. dies per package: 1
Jan 23 01:09:25.989810 kernel: CPU topo: Max. threads per core: 1
Jan 23 01:09:25.989845 kernel: CPU topo: Num. cores per package: 1
Jan 23 01:09:25.989866 kernel: CPU topo: Num. threads per package: 1
Jan 23 01:09:25.989878 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs
Jan 23 01:09:25.989890 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 01:09:25.989902 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 23 01:09:25.989914 kernel: Booting paravirtualized kernel on KVM
Jan 23 01:09:25.989926 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 01:09:25.989938 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 23 01:09:25.989977 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Jan 23 01:09:25.989991 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Jan 23 01:09:25.990009 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 23 01:09:25.990035 kernel: kvm-guest: PV spinlocks enabled
Jan 23 01:09:25.990048 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 01:09:25.990061 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:09:25.990074 kernel: random: crng init done
Jan 23 01:09:25.990086 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 01:09:25.990098 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 01:09:25.990110 kernel: Fallback order for Node 0: 0
Jan 23 01:09:25.990128 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154
Jan 23 01:09:25.990140 kernel: Policy zone: DMA32
Jan 23 01:09:25.990152 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 01:09:25.990164 kernel: software IO TLB: area num 16.
Jan 23 01:09:25.990176 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 23 01:09:25.990189 kernel: Kernel/User page tables isolation: enabled
Jan 23 01:09:25.990201 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 01:09:25.990239 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 01:09:25.990252 kernel: Dynamic Preempt: voluntary
Jan 23 01:09:25.990270 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 01:09:25.990283 kernel: rcu: RCU event tracing is enabled.
Jan 23 01:09:25.990295 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 23 01:09:25.990307 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 01:09:25.990319 kernel: Rude variant of Tasks RCU enabled.
Jan 23 01:09:25.990331 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 01:09:25.990344 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 01:09:25.990356 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 23 01:09:25.990368 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 23 01:09:25.990384 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 23 01:09:25.990397 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 23 01:09:25.990409 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 23 01:09:25.990421 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 01:09:25.990444 kernel: Console: colour VGA+ 80x25
Jan 23 01:09:25.990461 kernel: printk: legacy console [tty0] enabled
Jan 23 01:09:25.990474 kernel: printk: legacy console [ttyS0] enabled
Jan 23 01:09:25.990486 kernel: ACPI: Core revision 20240827
Jan 23 01:09:25.990499 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 01:09:25.990511 kernel: x2apic enabled
Jan 23 01:09:25.990524 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 01:09:25.990543 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 23 01:09:25.990560 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 23 01:09:25.990573 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 01:09:25.990609 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 23 01:09:25.990623 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 23 01:09:25.990636 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 01:09:25.990654 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 01:09:25.990667 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 01:09:25.990682 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 23 01:09:25.990695 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 01:09:25.990707 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 01:09:25.990719 kernel: MDS: Mitigation: Clear CPU buffers
Jan 23 01:09:25.990731 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 23 01:09:25.990744 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 23 01:09:25.990756 kernel: active return thunk: its_return_thunk
Jan 23 01:09:25.990768 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 23 01:09:25.990780 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 01:09:25.990836 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 01:09:25.990851 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 01:09:25.990863 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 01:09:25.990875 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 23 01:09:25.990888 kernel: Freeing SMP alternatives memory: 32K
Jan 23 01:09:25.990900 kernel: pid_max: default: 32768 minimum: 301
Jan 23 01:09:25.990912 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 01:09:25.990924 kernel: landlock: Up and running.
Jan 23 01:09:25.990936 kernel: SELinux: Initializing.
Jan 23 01:09:25.990948 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 23 01:09:25.990961 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 23 01:09:25.990973 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 23 01:09:25.990991 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 23 01:09:25.993071 kernel: signal: max sigframe size: 1776
Jan 23 01:09:25.993085 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 01:09:25.993099 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 01:09:25.993112 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level
Jan 23 01:09:25.993126 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 01:09:25.993138 kernel: smp: Bringing up secondary CPUs ...
Jan 23 01:09:25.993151 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 01:09:25.993171 kernel: .... node #0, CPUs: #1
Jan 23 01:09:25.993192 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 01:09:25.993204 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 23 01:09:25.993218 kernel: Memory: 1887480K/2096616K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 203120K reserved, 0K cma-reserved)
Jan 23 01:09:25.993231 kernel: devtmpfs: initialized
Jan 23 01:09:25.993243 kernel: x86/mm: Memory block size: 128MB
Jan 23 01:09:25.993261 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 01:09:25.993274 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 23 01:09:25.993286 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 01:09:25.993326 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 01:09:25.993347 kernel: audit: initializing netlink subsys (disabled)
Jan 23 01:09:25.993360 kernel: audit: type=2000 audit(1769130562.196:1): state=initialized audit_enabled=0 res=1
Jan 23 01:09:25.993372 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 01:09:25.993385 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 01:09:25.993397 kernel: cpuidle: using governor menu
Jan 23 01:09:25.993410 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 01:09:25.993422 kernel: dca service started, version 1.12.1
Jan 23 01:09:25.993435 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 23 01:09:25.993447 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 23 01:09:25.993465 kernel: PCI: Using configuration type 1 for base access
Jan 23 01:09:25.993478 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 01:09:25.993491 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 01:09:25.993503 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 01:09:25.993516 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 01:09:25.993528 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 01:09:25.993541 kernel: ACPI: Added _OSI(Module Device)
Jan 23 01:09:25.993553 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 01:09:25.993566 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 01:09:25.993583 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 01:09:25.993596 kernel: ACPI: Interpreter enabled
Jan 23 01:09:25.993608 kernel: ACPI: PM: (supports S0 S5)
Jan 23 01:09:25.993621 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 01:09:25.993633 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 01:09:25.993646 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 01:09:25.993659 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 01:09:25.993704 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 01:09:25.994097 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 01:09:25.994312 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 01:09:25.994511 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 01:09:25.994531 kernel: PCI host bridge to bus 0000:00
Jan 23 01:09:25.994750 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 01:09:25.994938 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 01:09:25.997136 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 01:09:25.997342 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 23 01:09:25.997499 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 01:09:25.997680 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 23 01:09:25.997901 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 01:09:25.998135 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 01:09:25.998396 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint
Jan 23 01:09:25.998605 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref]
Jan 23 01:09:25.998810 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff]
Jan 23 01:09:25.998977 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref]
Jan 23 01:09:26.001217 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 01:09:26.001414 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:09:26.001642 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff]
Jan 23 01:09:26.001853 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 23 01:09:26.002097 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 23 01:09:26.002274 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 23 01:09:26.002533 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:09:26.002738 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff]
Jan 23 01:09:26.002947 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 23 01:09:26.005214 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 23 01:09:26.005431 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 23 01:09:26.005661 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:09:26.005877 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff]
Jan 23 01:09:26.006099 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 23 01:09:26.006300 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 23 01:09:26.006498 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 23 01:09:26.006718 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:09:26.006926 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff]
Jan 23 01:09:26.012777 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 23 01:09:26.013049 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 23 01:09:26.013227 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 23 01:09:26.013437 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:09:26.013638 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff]
Jan 23 01:09:26.013857 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 23 01:09:26.014073 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 23 01:09:26.014249 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 23 01:09:26.014498 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:09:26.014699 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff]
Jan 23 01:09:26.014955 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 23 01:09:26.015148 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 23 01:09:26.015368 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 23 01:09:26.015572 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:09:26.015776 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff]
Jan 23 01:09:26.015982 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 23 01:09:26.016172 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 23 01:09:26.016374 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 23 01:09:26.016604 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:09:26.016823 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff]
Jan 23 01:09:26.017074 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 23 01:09:26.017272 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 23 01:09:26.017477 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 23 01:09:26.017690 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 01:09:26.017886 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 23 01:09:26.019810 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff]
Jan 23 01:09:26.020040 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 23 01:09:26.020243 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref]
Jan 23 01:09:26.020440 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 01:09:26.020630 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Jan 23 01:09:26.020857 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff]
Jan 23 01:09:26.021076 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 23 01:09:26.021310 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 01:09:26.021477 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 01:09:26.021689 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 01:09:26.021932 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff]
Jan 23 01:09:26.026031 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff]
Jan 23 01:09:26.026264 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 01:09:26.026469 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 23 01:09:26.026687 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Jan 23 01:09:26.026905 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit]
Jan 23 01:09:26.027131 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 23 01:09:26.027329 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 23 01:09:26.027563 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 23 01:09:26.027788 kernel: pci_bus 0000:02: extended config space not accessible
Jan 23 01:09:26.028063 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint
Jan 23 01:09:26.028275 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f]
Jan 23 01:09:26.028480 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 23 01:09:26.028709 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 23 01:09:26.028894 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit]
Jan 23 01:09:26.031106 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 23 01:09:26.031305 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 01:09:26.031542 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 23 01:09:26.031762 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 23 01:09:26.032567 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 23 01:09:26.032771 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 23 01:09:26.032975 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 23 01:09:26.033193 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 23 01:09:26.033372 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 23 01:09:26.033394 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 01:09:26.033408 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 01:09:26.033421 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 01:09:26.033452 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 01:09:26.033465 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 01:09:26.033479 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 01:09:26.033492 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 01:09:26.033504 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 01:09:26.033517 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 01:09:26.033530 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 01:09:26.033542 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 01:09:26.033555 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 01:09:26.033572 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 01:09:26.033597 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 01:09:26.033610 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 01:09:26.033622 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 01:09:26.033634 kernel: iommu: Default domain type: Translated
Jan 23 01:09:26.033646 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 01:09:26.033670 kernel: PCI: Using ACPI for IRQ routing
Jan 23 01:09:26.033681 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 01:09:26.033693 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 23 01:09:26.033709 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 23 01:09:26.033892 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 01:09:26.034157 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 01:09:26.034374 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 01:09:26.034397 kernel: vgaarb: loaded
Jan 23 01:09:26.034411 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 01:09:26.034425 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 01:09:26.034439 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 01:09:26.034462 kernel: pnp: PnP ACPI init
Jan 23 01:09:26.034645 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 23 01:09:26.034667 kernel: pnp: PnP ACPI: found 5 devices
Jan 23 01:09:26.034680 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 01:09:26.034694 kernel: NET: Registered PF_INET protocol family
Jan 23 01:09:26.034707 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 01:09:26.034721 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 23 01:09:26.034734 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 01:09:26.034754 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 01:09:26.034768 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 23 01:09:26.034781 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 23 01:09:26.034807 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 23 01:09:26.034822 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 23 01:09:26.034834 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 01:09:26.034847 kernel: NET: Registered PF_XDP protocol family
Jan 23 01:09:26.035028 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 23 01:09:26.035198 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 23 01:09:26.035371 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 23 01:09:26.035535 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 23 01:09:26.035699 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 23 01:09:26.035889 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 23 01:09:26.037130 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 23 01:09:26.037420 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 23 01:09:26.037601 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Jan 23 01:09:26.040207 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Jan 23 01:09:26.040403 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Jan 23 01:09:26.040569 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Jan 23 01:09:26.040776 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Jan 23 01:09:26.040983 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Jan 23 01:09:26.041166 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Jan 23 01:09:26.041330 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Jan 23 01:09:26.041499 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 23 01:09:26.041701 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 23 01:09:26.041877 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 23 01:09:26.042066 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 23 01:09:26.042231 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 23 01:09:26.042393 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 23 01:09:26.042555 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 23 01:09:26.045104 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 23 01:09:26.045313 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 23 01:09:26.045479 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 23 01:09:26.045643 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 23 01:09:26.045827 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 23 01:09:26.045999 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 23 01:09:26.046194 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 23 01:09:26.046382 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 23 01:09:26.046545 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 23 01:09:26.046715 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 23 01:09:26.046890 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 23 01:09:26.047074 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 23 01:09:26.047238 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 23 01:09:26.047400 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 23 01:09:26.047562 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 23 01:09:26.047734 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 23 01:09:26.047912 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 23 01:09:26.048092 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 23 01:09:26.048255 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 23 01:09:26.048417 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 23 01:09:26.048579 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 23 01:09:26.048760 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 23 01:09:26.048937 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 23 01:09:26.049122 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 23 01:09:26.049293 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 23 01:09:26.049462 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 23 01:09:26.049623 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 23 01:09:26.049781 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 01:09:26.049944 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 01:09:26.050118 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 01:09:26.050266 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 23 01:09:26.050413 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 23 01:09:26.050599 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 23 01:09:26.050781 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 23 01:09:26.050950 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 23 01:09:26.051143 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 23 01:09:26.051317 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 23 01:09:26.051491 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 23 01:09:26.051645 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 23 01:09:26.051821 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 23 01:09:26.051987 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 23 01:09:26.052168 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 23 01:09:26.052328 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 23 01:09:26.052508 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 23 01:09:26.052662 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 23 01:09:26.052826 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 23 01:09:26.053000 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 23 01:09:26.053176 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 23 01:09:26.053348 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 23 01:09:26.053516 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 23 01:09:26.053669 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 23
01:09:26.053844 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 23 01:09:26.054009 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 23 01:09:26.054195 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 23 01:09:26.054359 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 23 01:09:26.054521 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 23 01:09:26.054673 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 23 01:09:26.054845 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 23 01:09:26.054868 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 23 01:09:26.054882 kernel: PCI: CLS 0 bytes, default 64 Jan 23 01:09:26.054903 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 23 01:09:26.054917 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 23 01:09:26.054931 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 23 01:09:26.054945 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 23 01:09:26.054963 kernel: Initialise system trusted keyrings Jan 23 01:09:26.054977 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 23 01:09:26.054990 kernel: Key type asymmetric registered Jan 23 01:09:26.055003 kernel: Asymmetric key parser 'x509' registered Jan 23 01:09:26.055038 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 01:09:26.055053 kernel: io scheduler mq-deadline registered Jan 23 01:09:26.055067 kernel: io scheduler kyber registered Jan 23 01:09:26.055085 kernel: io scheduler bfq registered Jan 23 01:09:26.055248 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 23 01:09:26.055411 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 23 01:09:26.055575 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ 
PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 01:09:26.055745 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 23 01:09:26.055930 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 23 01:09:26.056133 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 01:09:26.056297 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 23 01:09:26.056465 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 23 01:09:26.056627 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 01:09:26.056788 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 23 01:09:26.056981 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 23 01:09:26.057163 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 01:09:26.057326 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 23 01:09:26.057496 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 23 01:09:26.057659 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 01:09:26.057833 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 23 01:09:26.058002 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 23 01:09:26.058190 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 01:09:26.058384 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 23 01:09:26.058548 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 23 01:09:26.058721 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- 
AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 01:09:26.058896 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 23 01:09:26.059089 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 23 01:09:26.059252 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 01:09:26.059273 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 01:09:26.059288 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 23 01:09:26.059302 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 23 01:09:26.059316 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 01:09:26.059330 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 01:09:26.059350 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 01:09:26.059364 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 01:09:26.059378 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 01:09:26.059391 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 23 01:09:26.059591 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 23 01:09:26.059747 kernel: rtc_cmos 00:03: registered as rtc0 Jan 23 01:09:26.059914 kernel: rtc_cmos 00:03: setting system clock to 2026-01-23T01:09:25 UTC (1769130565) Jan 23 01:09:26.060094 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 23 01:09:26.060122 kernel: intel_pstate: CPU model not supported Jan 23 01:09:26.060136 kernel: NET: Registered PF_INET6 protocol family Jan 23 01:09:26.060150 kernel: Segment Routing with IPv6 Jan 23 01:09:26.060164 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 01:09:26.060177 kernel: NET: Registered PF_PACKET protocol family Jan 23 01:09:26.060191 kernel: Key type dns_resolver registered Jan 23 01:09:26.060204 kernel: IPI shorthand broadcast: enabled Jan 23 01:09:26.060217 kernel: 
sched_clock: Marking stable (3545003921, 224521364)->(3897985045, -128459760) Jan 23 01:09:26.060231 kernel: registered taskstats version 1 Jan 23 01:09:26.060249 kernel: Loading compiled-in X.509 certificates Jan 23 01:09:26.060263 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a' Jan 23 01:09:26.060276 kernel: Demotion targets for Node 0: null Jan 23 01:09:26.060289 kernel: Key type .fscrypt registered Jan 23 01:09:26.060303 kernel: Key type fscrypt-provisioning registered Jan 23 01:09:26.060316 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 01:09:26.060329 kernel: ima: Allocated hash algorithm: sha1 Jan 23 01:09:26.060343 kernel: ima: No architecture policies found Jan 23 01:09:26.060356 kernel: clk: Disabling unused clocks Jan 23 01:09:26.060374 kernel: Warning: unable to open an initial console. Jan 23 01:09:26.060388 kernel: Freeing unused kernel image (initmem) memory: 46196K Jan 23 01:09:26.060401 kernel: Write protecting the kernel read-only data: 40960k Jan 23 01:09:26.060415 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 23 01:09:26.060433 kernel: Run /init as init process Jan 23 01:09:26.060447 kernel: with arguments: Jan 23 01:09:26.060460 kernel: /init Jan 23 01:09:26.060473 kernel: with environment: Jan 23 01:09:26.060486 kernel: HOME=/ Jan 23 01:09:26.060499 kernel: TERM=linux Jan 23 01:09:26.060527 systemd[1]: Successfully made /usr/ read-only. Jan 23 01:09:26.060585 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 01:09:26.060602 systemd[1]: Detected virtualization kvm. 
Jan 23 01:09:26.060616 systemd[1]: Detected architecture x86-64. Jan 23 01:09:26.060629 systemd[1]: Running in initrd. Jan 23 01:09:26.060643 systemd[1]: No hostname configured, using default hostname. Jan 23 01:09:26.060658 systemd[1]: Hostname set to . Jan 23 01:09:26.060679 systemd[1]: Initializing machine ID from VM UUID. Jan 23 01:09:26.060694 systemd[1]: Queued start job for default target initrd.target. Jan 23 01:09:26.060708 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:09:26.060722 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:09:26.060738 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 01:09:26.060753 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 01:09:26.060767 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 01:09:26.060788 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 01:09:26.060816 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 01:09:26.060830 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 01:09:26.060845 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:09:26.060859 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:09:26.060873 systemd[1]: Reached target paths.target - Path Units. Jan 23 01:09:26.060887 systemd[1]: Reached target slices.target - Slice Units. Jan 23 01:09:26.060902 systemd[1]: Reached target swap.target - Swaps. Jan 23 01:09:26.060922 systemd[1]: Reached target timers.target - Timer Units. 
Jan 23 01:09:26.060937 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 01:09:26.060951 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 01:09:26.060966 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 01:09:26.060980 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 01:09:26.060995 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:09:26.061009 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 01:09:26.061042 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:09:26.061064 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 01:09:26.061078 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 01:09:26.061093 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 01:09:26.061107 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 01:09:26.061122 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 01:09:26.061136 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 01:09:26.061151 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 01:09:26.061165 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 01:09:26.061180 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:09:26.061200 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 01:09:26.061216 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:09:26.061296 systemd-journald[209]: Collecting audit messages is disabled. Jan 23 01:09:26.061338 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 23 01:09:26.061353 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 01:09:26.061368 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 01:09:26.061382 kernel: Bridge firewalling registered Jan 23 01:09:26.061398 systemd-journald[209]: Journal started Jan 23 01:09:26.061437 systemd-journald[209]: Runtime Journal (/run/log/journal/b579f76a320a49eaa0037f3272b40ddd) is 4.7M, max 37.8M, 33.1M free. Jan 23 01:09:25.975090 systemd-modules-load[211]: Inserted module 'overlay' Jan 23 01:09:26.045253 systemd-modules-load[211]: Inserted module 'br_netfilter' Jan 23 01:09:26.097428 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 01:09:26.099560 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 01:09:26.100733 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:09:26.108209 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 01:09:26.113184 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:09:26.115274 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 01:09:26.117707 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:09:26.127228 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:09:26.139186 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:09:26.147712 systemd-tmpfiles[228]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 01:09:26.157341 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 23 01:09:26.159489 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 01:09:26.164211 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 01:09:26.167176 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 01:09:26.169064 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:09:26.201500 dracut-cmdline[248]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6 Jan 23 01:09:26.227544 systemd-resolved[249]: Positive Trust Anchors: Jan 23 01:09:26.227575 systemd-resolved[249]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:09:26.227632 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:09:26.232493 systemd-resolved[249]: Defaulting to hostname 'linux'. Jan 23 01:09:26.234462 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 01:09:26.237907 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 23 01:09:26.335071 kernel: SCSI subsystem initialized Jan 23 01:09:26.348055 kernel: Loading iSCSI transport class v2.0-870. Jan 23 01:09:26.363047 kernel: iscsi: registered transport (tcp) Jan 23 01:09:26.390076 kernel: iscsi: registered transport (qla4xxx) Jan 23 01:09:26.390159 kernel: QLogic iSCSI HBA Driver Jan 23 01:09:26.417245 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:09:26.438224 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:09:26.441213 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:09:26.507644 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 01:09:26.510711 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 01:09:26.574082 kernel: raid6: sse2x4 gen() 13404 MB/s Jan 23 01:09:26.592067 kernel: raid6: sse2x2 gen() 9499 MB/s Jan 23 01:09:26.610624 kernel: raid6: sse2x1 gen() 9789 MB/s Jan 23 01:09:26.610667 kernel: raid6: using algorithm sse2x4 gen() 13404 MB/s Jan 23 01:09:26.629803 kernel: raid6: .... xor() 7724 MB/s, rmw enabled Jan 23 01:09:26.629853 kernel: raid6: using ssse3x2 recovery algorithm Jan 23 01:09:26.656062 kernel: xor: automatically using best checksumming function avx Jan 23 01:09:26.848087 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 01:09:26.856954 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:09:26.860012 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:09:26.892430 systemd-udevd[459]: Using default interface naming scheme 'v255'. Jan 23 01:09:26.903041 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:09:26.905785 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 23 01:09:26.933294 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Jan 23 01:09:26.968784 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:09:26.971372 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 01:09:27.088101 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:09:27.091372 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 01:09:27.198050 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 23 01:09:27.203281 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 23 01:09:27.220187 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 01:09:27.220227 kernel: GPT:17805311 != 125829119 Jan 23 01:09:27.220246 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 01:09:27.220285 kernel: GPT:17805311 != 125829119 Jan 23 01:09:27.220301 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 01:09:27.220330 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 01:09:27.225037 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 01:09:27.242040 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 23 01:09:27.277636 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:09:27.277829 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:09:27.280584 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:09:27.282603 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:09:27.285239 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:09:27.291038 kernel: AES CTR mode by8 optimization enabled Jan 23 01:09:27.296048 kernel: libata version 3.00 loaded. 
Jan 23 01:09:27.319049 kernel: ahci 0000:00:1f.2: version 3.0 Jan 23 01:09:27.323825 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 23 01:09:27.329081 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 23 01:09:27.329316 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 23 01:09:27.329517 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 23 01:09:27.344046 kernel: ACPI: bus type USB registered Jan 23 01:09:27.346063 kernel: scsi host0: ahci Jan 23 01:09:27.358151 kernel: usbcore: registered new interface driver usbfs Jan 23 01:09:27.358192 kernel: scsi host1: ahci Jan 23 01:09:27.362056 kernel: scsi host2: ahci Jan 23 01:09:27.362625 kernel: usbcore: registered new interface driver hub Jan 23 01:09:27.362650 kernel: usbcore: registered new device driver usb Jan 23 01:09:27.362668 kernel: scsi host3: ahci Jan 23 01:09:27.364316 kernel: scsi host4: ahci Jan 23 01:09:27.364519 kernel: scsi host5: ahci Jan 23 01:09:27.364714 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 lpm-pol 1 Jan 23 01:09:27.364735 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 lpm-pol 1 Jan 23 01:09:27.364773 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 lpm-pol 1 Jan 23 01:09:27.364793 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 lpm-pol 1 Jan 23 01:09:27.364810 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 lpm-pol 1 Jan 23 01:09:27.364827 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 lpm-pol 1 Jan 23 01:09:27.438837 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 23 01:09:27.499703 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 23 01:09:27.501187 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 01:09:27.517663 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 01:09:27.529339 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 23 01:09:27.530238 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 23 01:09:27.533927 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 01:09:27.551332 disk-uuid[615]: Primary Header is updated. Jan 23 01:09:27.551332 disk-uuid[615]: Secondary Entries is updated. Jan 23 01:09:27.551332 disk-uuid[615]: Secondary Header is updated. Jan 23 01:09:27.557092 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 01:09:27.671195 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 23 01:09:27.671258 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 23 01:09:27.677051 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 23 01:09:27.677098 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 01:09:27.683882 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 01:09:27.683923 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 01:09:27.707046 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 23 01:09:27.723041 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 23 01:09:27.732138 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 23 01:09:27.732392 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 23 01:09:27.734490 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 23 01:09:27.736365 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 23 01:09:27.738330 kernel: hub 1-0:1.0: USB hub found Jan 23 01:09:27.739462 kernel: hub 1-0:1.0: 4 ports detected Jan 23 01:09:27.741432 kernel: usb usb2: We don't know the 
algorithms for LPM for this host, disabling LPM. Jan 23 01:09:27.745487 kernel: hub 2-0:1.0: USB hub found Jan 23 01:09:27.745771 kernel: hub 2-0:1.0: 4 ports detected Jan 23 01:09:27.761053 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 01:09:27.763930 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 01:09:27.765808 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:09:27.767448 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:09:27.769264 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 01:09:27.801073 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 01:09:27.977098 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 23 01:09:28.118167 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 01:09:28.124277 kernel: usbcore: registered new interface driver usbhid Jan 23 01:09:28.124321 kernel: usbhid: USB HID core driver Jan 23 01:09:28.132052 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jan 23 01:09:28.136048 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 23 01:09:28.569555 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 01:09:28.570981 disk-uuid[616]: The operation has completed successfully. Jan 23 01:09:28.649874 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 01:09:28.651123 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 01:09:28.699011 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 01:09:28.726420 sh[642]: Success Jan 23 01:09:28.752967 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 01:09:28.753111 kernel: device-mapper: uevent: version 1.0.3 Jan 23 01:09:28.756209 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 01:09:28.775052 kernel: device-mapper: verity: sha256 using shash "sha256-avx" Jan 23 01:09:28.818266 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 01:09:28.826190 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 01:09:28.841217 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 01:09:28.856069 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (654) Jan 23 01:09:28.857063 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 Jan 23 01:09:28.859208 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:09:28.880748 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 01:09:28.880804 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 01:09:28.882579 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 01:09:28.883895 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 01:09:28.884807 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 01:09:28.885906 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 01:09:28.891198 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 23 01:09:28.920051 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (687) Jan 23 01:09:28.925048 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:09:28.925084 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:09:28.931069 kernel: BTRFS info (device vda6): turning on async discard Jan 23 01:09:28.931118 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 01:09:28.939036 kernel: BTRFS info (device vda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:09:28.941064 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 01:09:28.943520 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 01:09:29.038612 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:09:29.051264 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:09:29.115618 systemd-networkd[823]: lo: Link UP Jan 23 01:09:29.116598 systemd-networkd[823]: lo: Gained carrier Jan 23 01:09:29.120473 systemd-networkd[823]: Enumeration completed Jan 23 01:09:29.121291 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:09:29.122244 systemd[1]: Reached target network.target - Network. Jan 23 01:09:29.124443 systemd-networkd[823]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:09:29.124450 systemd-networkd[823]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:09:29.126861 systemd-networkd[823]: eth0: Link UP Jan 23 01:09:29.128763 systemd-networkd[823]: eth0: Gained carrier Jan 23 01:09:29.128780 systemd-networkd[823]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 23 01:09:29.154166 systemd-networkd[823]: eth0: DHCPv4 address 10.230.15.178/30, gateway 10.230.15.177 acquired from 10.230.15.177
Jan 23 01:09:29.169869 ignition[740]: Ignition 2.22.0
Jan 23 01:09:29.169890 ignition[740]: Stage: fetch-offline
Jan 23 01:09:29.169971 ignition[740]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:09:29.169990 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 01:09:29.172324 ignition[740]: parsed url from cmdline: ""
Jan 23 01:09:29.172332 ignition[740]: no config URL provided
Jan 23 01:09:29.172342 ignition[740]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 01:09:29.172358 ignition[740]: no config at "/usr/lib/ignition/user.ign"
Jan 23 01:09:29.176443 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 01:09:29.172375 ignition[740]: failed to fetch config: resource requires networking
Jan 23 01:09:29.179468 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 01:09:29.172635 ignition[740]: Ignition finished successfully
Jan 23 01:09:29.219525 ignition[832]: Ignition 2.22.0
Jan 23 01:09:29.219552 ignition[832]: Stage: fetch
Jan 23 01:09:29.219825 ignition[832]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:09:29.219845 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 01:09:29.219991 ignition[832]: parsed url from cmdline: ""
Jan 23 01:09:29.219998 ignition[832]: no config URL provided
Jan 23 01:09:29.220009 ignition[832]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 01:09:29.220054 ignition[832]: no config at "/usr/lib/ignition/user.ign"
Jan 23 01:09:29.220282 ignition[832]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 23 01:09:29.220457 ignition[832]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 23 01:09:29.220493 ignition[832]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 23 01:09:29.241856 ignition[832]: GET result: OK
Jan 23 01:09:29.242257 ignition[832]: parsing config with SHA512: d13c3f2355bb165d9e8fb2022f085aac81cc76b1a6cee25c3dabe6260229b2e4bbab2383dfc34eb4376b3566d8eeb8601a6a3cdb7db0a1a0ee37f56b96750add
Jan 23 01:09:29.249909 unknown[832]: fetched base config from "system"
Jan 23 01:09:29.249930 unknown[832]: fetched base config from "system"
Jan 23 01:09:29.250528 ignition[832]: fetch: fetch complete
Jan 23 01:09:29.249939 unknown[832]: fetched user config from "openstack"
Jan 23 01:09:29.250536 ignition[832]: fetch: fetch passed
Jan 23 01:09:29.253470 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 01:09:29.250612 ignition[832]: Ignition finished successfully
Jan 23 01:09:29.257363 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 01:09:29.303971 ignition[839]: Ignition 2.22.0
Jan 23 01:09:29.304001 ignition[839]: Stage: kargs
Jan 23 01:09:29.304281 ignition[839]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:09:29.304301 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 01:09:29.306351 ignition[839]: kargs: kargs passed
Jan 23 01:09:29.306434 ignition[839]: Ignition finished successfully
Jan 23 01:09:29.309612 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 01:09:29.313326 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 01:09:29.351224 ignition[845]: Ignition 2.22.0
Jan 23 01:09:29.351249 ignition[845]: Stage: disks
Jan 23 01:09:29.351460 ignition[845]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:09:29.351481 ignition[845]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 01:09:29.352532 ignition[845]: disks: disks passed
Jan 23 01:09:29.354360 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 01:09:29.352609 ignition[845]: Ignition finished successfully
Jan 23 01:09:29.356320 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 01:09:29.357612 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 01:09:29.359005 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 01:09:29.360521 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 01:09:29.362098 systemd[1]: Reached target basic.target - Basic System.
Jan 23 01:09:29.366178 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 01:09:29.412045 systemd-fsck[853]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Jan 23 01:09:29.414897 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 01:09:29.418342 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 01:09:29.558566 kernel: EXT4-fs (vda9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none.
Jan 23 01:09:29.559883 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 01:09:29.561489 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 01:09:29.565068 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 01:09:29.569116 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 01:09:29.571079 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 01:09:29.577189 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 23 01:09:29.578964 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 01:09:29.579025 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 01:09:29.586467 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 01:09:29.593814 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 01:09:29.601045 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (861)
Jan 23 01:09:29.610089 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:09:29.610142 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:09:29.621303 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 01:09:29.621352 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 01:09:29.633741 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 01:09:29.681160 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 01:09:29.691777 initrd-setup-root[890]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 01:09:29.701341 initrd-setup-root[897]: cut: /sysroot/etc/group: No such file or directory
Jan 23 01:09:29.709665 initrd-setup-root[904]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 01:09:29.716033 initrd-setup-root[911]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 01:09:29.835675 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 01:09:29.838339 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 01:09:29.841201 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 01:09:29.863355 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 01:09:29.865970 kernel: BTRFS info (device vda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:09:29.887624 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 01:09:29.920528 ignition[979]: INFO : Ignition 2.22.0
Jan 23 01:09:29.921949 ignition[979]: INFO : Stage: mount
Jan 23 01:09:29.921949 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 01:09:29.921949 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 01:09:29.924531 ignition[979]: INFO : mount: mount passed
Jan 23 01:09:29.924531 ignition[979]: INFO : Ignition finished successfully
Jan 23 01:09:29.924200 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 01:09:30.583372 systemd-networkd[823]: eth0: Gained IPv6LL
Jan 23 01:09:30.717066 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 01:09:32.089637 systemd-networkd[823]: eth0: Ignoring DHCPv6 address 2a02:1348:179:83ec:24:19ff:fee6:fb2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:83ec:24:19ff:fee6:fb2/64 assigned by NDisc.
Jan 23 01:09:32.089653 systemd-networkd[823]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 23 01:09:32.729058 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 01:09:36.737062 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 01:09:36.743945 coreos-metadata[863]: Jan 23 01:09:36.743 WARN failed to locate config-drive, using the metadata service API instead
Jan 23 01:09:36.770704 coreos-metadata[863]: Jan 23 01:09:36.770 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 23 01:09:36.789456 coreos-metadata[863]: Jan 23 01:09:36.789 INFO Fetch successful
Jan 23 01:09:36.790508 coreos-metadata[863]: Jan 23 01:09:36.790 INFO wrote hostname srv-p26ko.gb1.brightbox.com to /sysroot/etc/hostname
Jan 23 01:09:36.795510 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 23 01:09:36.795731 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 23 01:09:36.800179 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 01:09:36.833068 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 01:09:36.869070 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (995)
Jan 23 01:09:36.874917 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:09:36.874958 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:09:36.881126 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 01:09:36.881173 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 01:09:36.884838 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 01:09:36.929095 ignition[1013]: INFO : Ignition 2.22.0
Jan 23 01:09:36.929095 ignition[1013]: INFO : Stage: files
Jan 23 01:09:36.931139 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 01:09:36.931139 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 01:09:36.931139 ignition[1013]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 01:09:36.933888 ignition[1013]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 01:09:36.933888 ignition[1013]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 01:09:36.936101 ignition[1013]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 01:09:36.943105 ignition[1013]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 01:09:36.943105 ignition[1013]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 01:09:36.943105 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 01:09:36.943105 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 23 01:09:36.937467 unknown[1013]: wrote ssh authorized keys file for user: core
Jan 23 01:09:37.154552 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 01:09:37.564860 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 01:09:37.567548 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 01:09:37.567548 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 01:09:37.567548 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 01:09:37.567548 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 01:09:37.567548 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 01:09:37.567548 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 01:09:37.567548 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 01:09:37.567548 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 01:09:37.576842 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 01:09:37.577969 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 01:09:37.577969 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 23 01:09:37.580550 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 23 01:09:37.580550 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 23 01:09:37.580550 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 23 01:09:37.988760 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 23 01:09:40.993309 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 23 01:09:40.993309 ignition[1013]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 23 01:09:40.999140 ignition[1013]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 01:09:41.003332 ignition[1013]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 01:09:41.004699 ignition[1013]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 23 01:09:41.004699 ignition[1013]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 01:09:41.004699 ignition[1013]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 01:09:41.004699 ignition[1013]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 01:09:41.004699 ignition[1013]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 01:09:41.004699 ignition[1013]: INFO : files: files passed
Jan 23 01:09:41.004699 ignition[1013]: INFO : Ignition finished successfully
Jan 23 01:09:41.007276 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 01:09:41.014265 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 01:09:41.018254 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 01:09:41.036231 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 01:09:41.036423 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 01:09:41.047873 initrd-setup-root-after-ignition[1043]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 01:09:41.047873 initrd-setup-root-after-ignition[1043]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 01:09:41.051853 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 01:09:41.053345 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 01:09:41.055206 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 01:09:41.057239 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 01:09:41.123293 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 01:09:41.123532 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 01:09:41.125451 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 01:09:41.126687 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 01:09:41.128440 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 01:09:41.131186 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 01:09:41.162951 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 01:09:41.166434 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 01:09:41.191538 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 01:09:41.193566 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:09:41.195536 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 01:09:41.196319 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 01:09:41.196539 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 01:09:41.198437 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 01:09:41.199328 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 01:09:41.201082 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 01:09:41.202467 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 01:09:41.203936 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 01:09:41.205597 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 01:09:41.207394 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 01:09:41.209033 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 01:09:41.210735 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 01:09:41.212274 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 01:09:41.213835 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 01:09:41.215412 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 01:09:41.215680 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 01:09:41.217250 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:09:41.218250 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:09:41.219786 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 01:09:41.220194 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:09:41.221364 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 01:09:41.221573 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 01:09:41.223714 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 01:09:41.223978 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 01:09:41.225657 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 01:09:41.225882 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 01:09:41.229203 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 01:09:41.235359 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 01:09:41.236157 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:09:41.246313 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 01:09:41.249037 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 01:09:41.249327 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 01:09:41.250321 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 01:09:41.250567 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 01:09:41.260448 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 01:09:41.262141 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 01:09:41.283036 ignition[1067]: INFO : Ignition 2.22.0
Jan 23 01:09:41.283036 ignition[1067]: INFO : Stage: umount
Jan 23 01:09:41.286681 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 01:09:41.286681 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 01:09:41.286681 ignition[1067]: INFO : umount: umount passed
Jan 23 01:09:41.286681 ignition[1067]: INFO : Ignition finished successfully
Jan 23 01:09:41.287600 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 01:09:41.288139 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 01:09:41.289268 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 01:09:41.289344 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 01:09:41.290801 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 01:09:41.290870 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 01:09:41.292984 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 01:09:41.293123 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 01:09:41.294943 systemd[1]: Stopped target network.target - Network.
Jan 23 01:09:41.295608 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 01:09:41.295700 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 01:09:41.297246 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 01:09:41.298189 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 01:09:41.302092 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:09:41.304288 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 01:09:41.305798 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 01:09:41.307465 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 01:09:41.307536 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 01:09:41.308748 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 01:09:41.308805 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 01:09:41.310232 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 01:09:41.310315 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 01:09:41.311964 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 01:09:41.312057 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 01:09:41.313559 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 01:09:41.315910 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 01:09:41.318464 systemd-networkd[823]: eth0: DHCPv6 lease lost
Jan 23 01:09:41.319600 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 01:09:41.322459 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 01:09:41.322611 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 01:09:41.325687 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 01:09:41.325899 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 01:09:41.331208 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 01:09:41.331565 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 01:09:41.331778 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 01:09:41.334602 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 01:09:41.336008 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 01:09:41.336841 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 01:09:41.337534 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:09:41.338424 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 01:09:41.338509 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 01:09:41.341179 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 01:09:41.342678 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 01:09:41.342750 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 01:09:41.345423 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 01:09:41.345493 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 01:09:41.347526 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 01:09:41.347597 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:09:41.351185 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 01:09:41.351258 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 01:09:41.354312 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 01:09:41.357858 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 01:09:41.357948 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 01:09:41.368926 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 01:09:41.371730 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 01:09:41.372935 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 01:09:41.373005 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:09:41.374798 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 01:09:41.374850 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:09:41.376458 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 01:09:41.376530 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 01:09:41.378775 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 01:09:41.378848 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 01:09:41.380422 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 01:09:41.380497 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 01:09:41.383437 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 01:09:41.386358 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 01:09:41.386448 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 01:09:41.388468 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 01:09:41.388535 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 01:09:41.391239 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 01:09:41.391394 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:09:41.399000 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 23 01:09:41.399114 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 01:09:41.399230 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 01:09:41.399898 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 01:09:41.402091 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 01:09:41.409815 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 01:09:41.409982 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 01:09:41.412366 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 01:09:41.414848 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 01:09:41.454866 systemd[1]: Switching root.
Jan 23 01:09:41.504557 systemd-journald[209]: Journal stopped
Jan 23 01:09:43.342259 systemd-journald[209]: Received SIGTERM from PID 1 (systemd).
Jan 23 01:09:43.342463 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 01:09:43.342512 kernel: SELinux: policy capability open_perms=1
Jan 23 01:09:43.342534 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 01:09:43.342572 kernel: SELinux: policy capability always_check_network=0
Jan 23 01:09:43.342600 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 01:09:43.342654 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 01:09:43.342687 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 01:09:43.342713 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 01:09:43.342746 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 01:09:43.342767 kernel: audit: type=1403 audit(1769130581.945:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 01:09:43.342812 systemd[1]: Successfully loaded SELinux policy in 79.210ms.
Jan 23 01:09:43.342863 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.716ms.
Jan 23 01:09:43.342906 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 01:09:43.342929 systemd[1]: Detected virtualization kvm.
Jan 23 01:09:43.342956 systemd[1]: Detected architecture x86-64.
Jan 23 01:09:43.342985 systemd[1]: Detected first boot.
Jan 23 01:09:43.355692 systemd[1]: Hostname set to .
Jan 23 01:09:43.355747 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 01:09:43.355930 zram_generator::config[1110]: No configuration found.
Jan 23 01:09:43.355987 kernel: Guest personality initialized and is inactive
Jan 23 01:09:43.356172 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 23 01:09:43.356207 kernel: Initialized host personality
Jan 23 01:09:43.356227 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 01:09:43.356258 systemd[1]: Populated /etc with preset unit settings.
Jan 23 01:09:43.356289 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 01:09:43.356462 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 01:09:43.356487 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 01:09:43.356508 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 01:09:43.356539 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 01:09:43.356672 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 01:09:43.356701 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 01:09:43.356730 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 01:09:43.356761 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 01:09:43.356784 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 01:09:43.356939 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 01:09:43.356975 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 01:09:43.356998 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:09:43.357179 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:09:43.357213 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 01:09:43.357243 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 01:09:43.357282 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 01:09:43.357305 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 01:09:43.357345 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 01:09:43.357368 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:09:43.357389 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:09:43.357410 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 01:09:43.357431 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 01:09:43.357451 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 01:09:43.357471 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 01:09:43.381152 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:09:43.381201 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 01:09:43.381224 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 01:09:43.381253 systemd[1]: Reached target swap.target - Swaps.
Jan 23 01:09:43.381284 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 01:09:43.381341 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 01:09:43.381381 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 01:09:43.381410 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:09:43.381437 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:09:43.381473 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:09:43.381496 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 01:09:43.381517 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 01:09:43.381554 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 01:09:43.381584 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 01:09:43.381606 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:09:43.381634 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 01:09:43.381662 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 01:09:43.381689 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 01:09:43.381723 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 01:09:43.381747 systemd[1]: Reached target machines.target - Containers.
Jan 23 01:09:43.381782 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 01:09:43.381804 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 01:09:43.381831 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 01:09:43.381858 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 01:09:43.381881 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 01:09:43.381908 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 01:09:43.381952 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 01:09:43.381974 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 01:09:43.381994 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 01:09:43.382027 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 01:09:43.390607 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 01:09:43.390658 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 01:09:43.390691 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 01:09:43.390713 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 01:09:43.390736 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 01:09:43.390775 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 01:09:43.390798 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 01:09:43.390820 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 01:09:43.390847 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 01:09:43.390869 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 01:09:43.399346 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 01:09:43.399434 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 01:09:43.399462 systemd[1]: Stopped verity-setup.service.
Jan 23 01:09:43.399494 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:09:43.399518 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 01:09:43.399558 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 01:09:43.399580 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 01:09:43.399611 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 01:09:43.399645 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 01:09:43.399674 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 01:09:43.399695 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:09:43.399716 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 01:09:43.399738 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 01:09:43.399771 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 01:09:43.399793 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 01:09:43.399814 kernel: loop: module loaded
Jan 23 01:09:43.399835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 01:09:43.399858 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 01:09:43.399878 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 01:09:43.399898 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 01:09:43.399919 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:09:43.399954 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 01:09:43.399991 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 01:09:43.410091 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 01:09:43.410150 kernel: fuse: init (API version 7.41)
Jan 23 01:09:43.410175 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 01:09:43.410219 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 01:09:43.410243 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 01:09:43.410277 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 01:09:43.410308 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 01:09:43.410344 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 01:09:43.410381 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 01:09:43.410414 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 01:09:43.410445 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 01:09:43.410467 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 01:09:43.410495 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 01:09:43.410517 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 01:09:43.410596 systemd-journald[1197]: Collecting audit messages is disabled.
Jan 23 01:09:43.410668 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 01:09:43.410693 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 01:09:43.410722 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 01:09:43.410744 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 01:09:43.410777 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 01:09:43.410798 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 01:09:43.410819 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 01:09:43.410848 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 01:09:43.410879 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 01:09:43.410913 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 01:09:43.410935 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 01:09:43.410955 kernel: ACPI: bus type drm_connector registered
Jan 23 01:09:43.410975 systemd-journald[1197]: Journal started
Jan 23 01:09:43.411008 systemd-journald[1197]: Runtime Journal (/run/log/journal/b579f76a320a49eaa0037f3272b40ddd) is 4.7M, max 37.8M, 33.1M free.
Jan 23 01:09:42.806002 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 01:09:42.831394 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 23 01:09:42.832165 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 01:09:43.433080 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 01:09:43.431105 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 01:09:43.431499 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 01:09:43.457879 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 01:09:43.464065 kernel: loop0: detected capacity change from 0 to 219144
Jan 23 01:09:43.494164 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 01:09:43.501265 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 01:09:43.535231 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 01:09:43.578794 systemd-journald[1197]: Time spent on flushing to /var/log/journal/b579f76a320a49eaa0037f3272b40ddd is 84.708ms for 1169 entries.
Jan 23 01:09:43.578794 systemd-journald[1197]: System Journal (/var/log/journal/b579f76a320a49eaa0037f3272b40ddd) is 8M, max 584.8M, 576.8M free.
Jan 23 01:09:43.703703 systemd-journald[1197]: Received client request to flush runtime journal.
Jan 23 01:09:43.703827 kernel: loop1: detected capacity change from 0 to 110984
Jan 23 01:09:43.703873 kernel: loop2: detected capacity change from 0 to 8
Jan 23 01:09:43.610880 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 01:09:43.622231 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 01:09:43.654178 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 01:09:43.684578 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Jan 23 01:09:43.684598 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Jan 23 01:09:43.693371 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 01:09:43.710039 kernel: loop3: detected capacity change from 0 to 128560
Jan 23 01:09:43.710236 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 01:09:43.776170 kernel: loop4: detected capacity change from 0 to 219144
Jan 23 01:09:43.828049 kernel: loop5: detected capacity change from 0 to 110984
Jan 23 01:09:43.834613 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 01:09:43.861046 kernel: loop6: detected capacity change from 0 to 8
Jan 23 01:09:43.870036 kernel: loop7: detected capacity change from 0 to 128560
Jan 23 01:09:43.919103 (sd-merge)[1274]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 23 01:09:43.920713 (sd-merge)[1274]: Merged extensions into '/usr'.
Jan 23 01:09:43.940621 systemd[1]: Reload requested from client PID 1228 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 01:09:43.940670 systemd[1]: Reloading...
Jan 23 01:09:44.109055 zram_generator::config[1296]: No configuration found.
Jan 23 01:09:44.257139 ldconfig[1221]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 01:09:44.475775 systemd[1]: Reloading finished in 533 ms.
Jan 23 01:09:44.498075 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 01:09:44.502494 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 01:09:44.514250 systemd[1]: Starting ensure-sysext.service...
Jan 23 01:09:44.521256 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 01:09:44.536917 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 01:09:44.542377 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 01:09:44.552200 systemd[1]: Reload requested from client PID 1356 ('systemctl') (unit ensure-sysext.service)...
Jan 23 01:09:44.552230 systemd[1]: Reloading...
Jan 23 01:09:44.559453 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 01:09:44.560379 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 01:09:44.561112 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 01:09:44.561759 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 01:09:44.563526 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 01:09:44.564244 systemd-tmpfiles[1357]: ACLs are not supported, ignoring.
Jan 23 01:09:44.564585 systemd-tmpfiles[1357]: ACLs are not supported, ignoring.
Jan 23 01:09:44.571128 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 01:09:44.571147 systemd-tmpfiles[1357]: Skipping /boot
Jan 23 01:09:44.590487 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 01:09:44.590674 systemd-tmpfiles[1357]: Skipping /boot
Jan 23 01:09:44.647273 systemd-udevd[1359]: Using default interface naming scheme 'v255'.
Jan 23 01:09:44.663121 zram_generator::config[1385]: No configuration found.
Jan 23 01:09:45.093060 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 01:09:45.145996 systemd[1]: Reloading finished in 593 ms.
Jan 23 01:09:45.153037 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 23 01:09:45.161760 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 01:09:45.170047 kernel: ACPI: button: Power Button [PWRF]
Jan 23 01:09:45.183783 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 01:09:45.210038 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 01:09:45.258138 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 01:09:45.265836 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:09:45.270347 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 01:09:45.275514 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 01:09:45.277145 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 01:09:45.281340 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 01:09:45.285975 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 01:09:45.290388 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 01:09:45.296040 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 23 01:09:45.301374 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 01:09:45.303076 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 23 01:09:45.303383 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 01:09:45.306134 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 01:09:45.307088 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 01:09:45.313427 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 01:09:45.321485 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 01:09:45.329275 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 01:09:45.334206 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 01:09:45.335885 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:09:45.350147 systemd[1]: Finished ensure-sysext.service.
Jan 23 01:09:45.364653 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 23 01:09:45.376499 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 01:09:45.405985 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 01:09:45.422724 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 01:09:45.426395 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 01:09:45.426750 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 01:09:45.428901 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 01:09:45.430148 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 01:09:45.439454 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 01:09:45.444284 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 01:09:45.444519 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 01:09:45.451230 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 01:09:45.460961 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 01:09:45.462867 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 01:09:45.464887 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 01:09:45.491413 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 01:09:45.524156 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 01:09:45.531217 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 01:09:45.579314 augenrules[1527]: No rules
Jan 23 01:09:45.580857 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 01:09:45.582100 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 01:09:45.583678 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 01:09:45.584807 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 01:09:45.654682 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:09:45.993895 systemd-networkd[1481]: lo: Link UP
Jan 23 01:09:45.993921 systemd-networkd[1481]: lo: Gained carrier
Jan 23 01:09:45.995081 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:09:46.000152 systemd-networkd[1481]: Enumeration completed
Jan 23 01:09:46.000326 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 01:09:46.001691 systemd-networkd[1481]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:09:46.001699 systemd-networkd[1481]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 01:09:46.006282 systemd-networkd[1481]: eth0: Link UP
Jan 23 01:09:46.006554 systemd-networkd[1481]: eth0: Gained carrier
Jan 23 01:09:46.006576 systemd-networkd[1481]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:09:46.008446 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 01:09:46.010964 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 01:09:46.024118 systemd-networkd[1481]: eth0: DHCPv4 address 10.230.15.178/30, gateway 10.230.15.177 acquired from 10.230.15.177
Jan 23 01:09:46.037523 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 23 01:09:46.038620 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 01:09:46.054120 systemd-resolved[1482]: Positive Trust Anchors:
Jan 23 01:09:46.054607 systemd-resolved[1482]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 01:09:46.054659 systemd-resolved[1482]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 01:09:46.069945 systemd-resolved[1482]: Using system hostname 'srv-p26ko.gb1.brightbox.com'.
Jan 23 01:09:46.075053 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 01:09:46.076007 systemd[1]: Reached target network.target - Network.
Jan 23 01:09:46.076650 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 01:09:46.077438 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 01:09:46.078280 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 01:09:46.079111 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 01:09:46.079911 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 23 01:09:46.081041 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 01:09:46.081956 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 01:09:46.082757 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 01:09:46.083929 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 01:09:46.083994 systemd[1]: Reached target paths.target - Path Units.
Jan 23 01:09:46.084687 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 01:09:46.088345 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 01:09:46.091982 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 01:09:46.099284 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 01:09:46.101231 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 01:09:46.102035 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 01:09:46.110337 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 01:09:46.111562 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 01:09:46.114089 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 01:09:46.115241 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 01:09:46.117606 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 01:09:46.118517 systemd[1]: Reached target basic.target - Basic System.
Jan 23 01:09:46.119319 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 01:09:46.119391 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 01:09:46.121218 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 01:09:46.125215 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 23 01:09:46.128357 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 01:09:46.133180 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 01:09:46.139842 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 01:09:46.148145 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 01:09:46.150127 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 01:09:46.153359 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 23 01:09:46.159081 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 01:09:46.157210 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 01:09:46.165188 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 01:09:46.171190 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 01:09:46.179345 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 01:09:46.183932 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing passwd entry cache
Jan 23 01:09:46.184430 oslogin_cache_refresh[1562]: Refreshing passwd entry cache
Jan 23 01:09:46.192400 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 01:09:46.195900 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 01:09:46.196903 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 01:09:46.206323 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 01:09:46.209590 jq[1560]: false
Jan 23 01:09:46.211048 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting users, quitting
Jan 23 01:09:46.211048 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 01:09:46.211048 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing group entry cache
Jan 23 01:09:46.210491 oslogin_cache_refresh[1562]: Failure getting users, quitting
Jan 23 01:09:46.210534 oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 01:09:46.210633 oslogin_cache_refresh[1562]: Refreshing group entry cache
Jan 23 01:09:46.214239 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 01:09:46.215221 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting groups, quitting
Jan 23 01:09:46.215221 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 01:09:46.215209 oslogin_cache_refresh[1562]: Failure getting groups, quitting
Jan 23 01:09:46.215225 oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 01:09:46.221179 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 01:09:46.223636 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 01:09:46.228720 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 23 01:09:46.229468 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 23 01:09:46.230842 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 23 01:09:46.725837 systemd-timesyncd[1486]: Contacted time server 87.106.36.214:123 (0.flatcar.pool.ntp.org).
Jan 23 01:09:46.728006 extend-filesystems[1561]: Found /dev/vda6
Jan 23 01:09:46.725989 systemd-timesyncd[1486]: Initial clock synchronization to Fri 2026-01-23 01:09:46.725693 UTC.
Jan 23 01:09:46.727497 systemd-resolved[1482]: Clock change detected. Flushing caches.
Jan 23 01:09:46.744113 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 01:09:46.744523 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 01:09:46.747873 extend-filesystems[1561]: Found /dev/vda9 Jan 23 01:09:46.762231 extend-filesystems[1561]: Checking size of /dev/vda9 Jan 23 01:09:46.772516 (ntainerd)[1589]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 01:09:46.796183 jq[1574]: true Jan 23 01:09:46.796517 update_engine[1573]: I20260123 01:09:46.788516 1573 main.cc:92] Flatcar Update Engine starting Jan 23 01:09:46.806704 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 01:09:46.807256 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 01:09:46.817254 tar[1576]: linux-amd64/LICENSE Jan 23 01:09:46.817254 tar[1576]: linux-amd64/helm Jan 23 01:09:46.853179 jq[1597]: true Jan 23 01:09:46.858457 extend-filesystems[1561]: Resized partition /dev/vda9 Jan 23 01:09:46.862251 dbus-daemon[1558]: [system] SELinux support is enabled Jan 23 01:09:46.865142 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 01:09:46.873116 extend-filesystems[1604]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 01:09:46.870185 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 01:09:46.870269 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 01:09:46.871546 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 01:09:46.871575 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 23 01:09:46.876811 dbus-daemon[1558]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1481 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 01:09:46.879267 dbus-daemon[1558]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 01:09:46.888457 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 23 01:09:46.889926 update_engine[1573]: I20260123 01:09:46.889698 1573 update_check_scheduler.cc:74] Next update check in 9m18s Jan 23 01:09:46.890304 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 01:09:46.891342 systemd[1]: Started update-engine.service - Update Engine. Jan 23 01:09:46.977949 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 01:09:47.013150 systemd-logind[1569]: Watching system buttons on /dev/input/event3 (Power Button) Jan 23 01:09:47.013198 systemd-logind[1569]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 01:09:47.015944 systemd-logind[1569]: New seat seat0. Jan 23 01:09:47.021055 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 01:09:47.134579 bash[1623]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:09:47.137056 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 01:09:47.154050 systemd[1]: Starting sshkeys.service... Jan 23 01:09:47.300716 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 01:09:47.320733 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 23 01:09:47.358997 containerd[1589]: time="2026-01-23T01:09:47Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 01:09:47.363670 containerd[1589]: time="2026-01-23T01:09:47.359788749Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 01:09:47.378443 containerd[1589]: time="2026-01-23T01:09:47.373729234Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="29.13µs" Jan 23 01:09:47.378443 containerd[1589]: time="2026-01-23T01:09:47.373790710Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 01:09:47.378443 containerd[1589]: time="2026-01-23T01:09:47.373829742Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 01:09:47.378443 containerd[1589]: time="2026-01-23T01:09:47.374187136Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 01:09:47.378443 containerd[1589]: time="2026-01-23T01:09:47.374215938Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 01:09:47.378443 containerd[1589]: time="2026-01-23T01:09:47.374269762Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:09:47.378443 containerd[1589]: time="2026-01-23T01:09:47.374376341Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:09:47.378443 containerd[1589]: time="2026-01-23T01:09:47.374444045Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 
01:09:47.378443 containerd[1589]: time="2026-01-23T01:09:47.374782319Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:09:47.378443 containerd[1589]: time="2026-01-23T01:09:47.374808515Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:09:47.378443 containerd[1589]: time="2026-01-23T01:09:47.374827832Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:09:47.378443 containerd[1589]: time="2026-01-23T01:09:47.374843009Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 01:09:47.384684 containerd[1589]: time="2026-01-23T01:09:47.374981260Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 01:09:47.384684 containerd[1589]: time="2026-01-23T01:09:47.375412360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:09:47.384684 containerd[1589]: time="2026-01-23T01:09:47.375470155Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:09:47.384684 containerd[1589]: time="2026-01-23T01:09:47.375492925Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 01:09:47.384684 containerd[1589]: time="2026-01-23T01:09:47.375554880Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 01:09:47.384684 
containerd[1589]: time="2026-01-23T01:09:47.376109668Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 01:09:47.384684 containerd[1589]: time="2026-01-23T01:09:47.376207394Z" level=info msg="metadata content store policy set" policy=shared Jan 23 01:09:47.388410 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:09:47.405776 containerd[1589]: time="2026-01-23T01:09:47.405704195Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 01:09:47.405919 containerd[1589]: time="2026-01-23T01:09:47.405840516Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 01:09:47.405919 containerd[1589]: time="2026-01-23T01:09:47.405877880Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 01:09:47.405919 containerd[1589]: time="2026-01-23T01:09:47.405901798Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 01:09:47.406127 containerd[1589]: time="2026-01-23T01:09:47.405931116Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 01:09:47.406127 containerd[1589]: time="2026-01-23T01:09:47.405969665Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 01:09:47.406127 containerd[1589]: time="2026-01-23T01:09:47.405995493Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 01:09:47.406127 containerd[1589]: time="2026-01-23T01:09:47.406049666Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 01:09:47.406127 containerd[1589]: time="2026-01-23T01:09:47.406072732Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Jan 23 01:09:47.406127 containerd[1589]: time="2026-01-23T01:09:47.406106130Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 01:09:47.406347 containerd[1589]: time="2026-01-23T01:09:47.406127971Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 01:09:47.406347 containerd[1589]: time="2026-01-23T01:09:47.406152471Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 01:09:47.407474 containerd[1589]: time="2026-01-23T01:09:47.406480763Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 01:09:47.407474 containerd[1589]: time="2026-01-23T01:09:47.406529187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 01:09:47.407474 containerd[1589]: time="2026-01-23T01:09:47.406565444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 01:09:47.407474 containerd[1589]: time="2026-01-23T01:09:47.406600264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 01:09:47.407474 containerd[1589]: time="2026-01-23T01:09:47.406637295Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 01:09:47.407474 containerd[1589]: time="2026-01-23T01:09:47.406660078Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 01:09:47.407474 containerd[1589]: time="2026-01-23T01:09:47.406679687Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 01:09:47.407474 containerd[1589]: time="2026-01-23T01:09:47.406697679Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 01:09:47.407474 containerd[1589]: 
time="2026-01-23T01:09:47.406716831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 01:09:47.407474 containerd[1589]: time="2026-01-23T01:09:47.406735319Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 01:09:47.407474 containerd[1589]: time="2026-01-23T01:09:47.406753342Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 01:09:47.407474 containerd[1589]: time="2026-01-23T01:09:47.406870339Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 01:09:47.407474 containerd[1589]: time="2026-01-23T01:09:47.406898170Z" level=info msg="Start snapshots syncer" Jan 23 01:09:47.407474 containerd[1589]: time="2026-01-23T01:09:47.406938579Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 01:09:47.411229 containerd[1589]: time="2026-01-23T01:09:47.407345466Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 01:09:47.411229 containerd[1589]: time="2026-01-23T01:09:47.411063744Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 01:09:47.411781 containerd[1589]: time="2026-01-23T01:09:47.411259888Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 01:09:47.413409 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 23 01:09:47.424019 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 01:09:47.432406 containerd[1589]: time="2026-01-23T01:09:47.432333527Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 01:09:47.432481 containerd[1589]: time="2026-01-23T01:09:47.432436754Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 01:09:47.432481 containerd[1589]: time="2026-01-23T01:09:47.432464579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 01:09:47.432546 containerd[1589]: time="2026-01-23T01:09:47.432486132Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 01:09:47.432546 containerd[1589]: time="2026-01-23T01:09:47.432526059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 01:09:47.432658 containerd[1589]: time="2026-01-23T01:09:47.432575851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 01:09:47.432658 containerd[1589]: time="2026-01-23T01:09:47.432621892Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 01:09:47.432735 containerd[1589]: time="2026-01-23T01:09:47.432674843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 01:09:47.432735 containerd[1589]: time="2026-01-23T01:09:47.432697728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 01:09:47.432735 containerd[1589]: time="2026-01-23T01:09:47.432717151Z" level=info msg="loading plugin" 
id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 01:09:47.433022 containerd[1589]: time="2026-01-23T01:09:47.432797391Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:09:47.433022 containerd[1589]: time="2026-01-23T01:09:47.432857307Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:09:47.433022 containerd[1589]: time="2026-01-23T01:09:47.432877405Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:09:47.433022 containerd[1589]: time="2026-01-23T01:09:47.432894747Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:09:47.433022 containerd[1589]: time="2026-01-23T01:09:47.432913619Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 01:09:47.433022 containerd[1589]: time="2026-01-23T01:09:47.432952219Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 01:09:47.433022 containerd[1589]: time="2026-01-23T01:09:47.432998508Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 01:09:47.433330 containerd[1589]: time="2026-01-23T01:09:47.433059766Z" level=info msg="runtime interface created" Jan 23 01:09:47.433330 containerd[1589]: time="2026-01-23T01:09:47.433073288Z" level=info msg="created NRI interface" Jan 23 01:09:47.433330 containerd[1589]: time="2026-01-23T01:09:47.433097064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 01:09:47.433330 containerd[1589]: time="2026-01-23T01:09:47.433142302Z" level=info msg="Connect containerd service" Jan 23 
01:09:47.433330 containerd[1589]: time="2026-01-23T01:09:47.433176224Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 01:09:47.435411 containerd[1589]: time="2026-01-23T01:09:47.435201700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:09:47.463790 extend-filesystems[1604]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 01:09:47.463790 extend-filesystems[1604]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 23 01:09:47.463790 extend-filesystems[1604]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 23 01:09:47.463299 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 01:09:47.444316 dbus-daemon[1558]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 01:09:47.474691 extend-filesystems[1561]: Resized filesystem in /dev/vda9 Jan 23 01:09:47.465823 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 01:09:47.480683 dbus-daemon[1558]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1607 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 01:09:47.492677 systemd[1]: Starting polkit.service - Authorization Manager... 
Jan 23 01:09:47.494005 locksmithd[1608]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 01:09:47.658652 containerd[1589]: time="2026-01-23T01:09:47.658455344Z" level=info msg="Start subscribing containerd event" Jan 23 01:09:47.658918 containerd[1589]: time="2026-01-23T01:09:47.658856962Z" level=info msg="Start recovering state" Jan 23 01:09:47.660776 containerd[1589]: time="2026-01-23T01:09:47.659144442Z" level=info msg="Start event monitor" Jan 23 01:09:47.660776 containerd[1589]: time="2026-01-23T01:09:47.659175820Z" level=info msg="Start cni network conf syncer for default" Jan 23 01:09:47.660776 containerd[1589]: time="2026-01-23T01:09:47.659194933Z" level=info msg="Start streaming server" Jan 23 01:09:47.660776 containerd[1589]: time="2026-01-23T01:09:47.659220058Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 01:09:47.660776 containerd[1589]: time="2026-01-23T01:09:47.659239761Z" level=info msg="runtime interface starting up..." Jan 23 01:09:47.660776 containerd[1589]: time="2026-01-23T01:09:47.659254583Z" level=info msg="starting plugins..." Jan 23 01:09:47.660776 containerd[1589]: time="2026-01-23T01:09:47.659290728Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 01:09:47.661165 containerd[1589]: time="2026-01-23T01:09:47.661136211Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 01:09:47.661360 containerd[1589]: time="2026-01-23T01:09:47.661336170Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 01:09:47.662595 containerd[1589]: time="2026-01-23T01:09:47.662534522Z" level=info msg="containerd successfully booted in 0.304773s" Jan 23 01:09:47.663165 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 23 01:09:47.729434 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:09:47.736251 polkitd[1648]: Started polkitd version 126 Jan 23 01:09:47.753097 polkitd[1648]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 01:09:47.757606 polkitd[1648]: Loading rules from directory /run/polkit-1/rules.d Jan 23 01:09:47.758478 polkitd[1648]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 01:09:47.758855 polkitd[1648]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 01:09:47.758908 polkitd[1648]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 01:09:47.758980 polkitd[1648]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 01:09:47.761166 polkitd[1648]: Finished loading, compiling and executing 2 rules Jan 23 01:09:47.762663 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 01:09:47.763111 sshd_keygen[1601]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 01:09:47.765827 dbus-daemon[1558]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 01:09:47.767493 polkitd[1648]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 01:09:47.805774 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 01:09:47.813888 systemd-hostnamed[1607]: Hostname set to (static) Jan 23 01:09:47.814680 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 01:09:47.838646 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 01:09:47.839044 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 01:09:47.844945 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 01:09:47.869333 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jan 23 01:09:47.875871 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 01:09:47.882539 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 01:09:47.884851 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 01:09:47.986216 tar[1576]: linux-amd64/README.md Jan 23 01:09:48.008522 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 01:09:48.346760 systemd-networkd[1481]: eth0: Gained IPv6LL Jan 23 01:09:48.351967 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 01:09:48.354143 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 01:09:48.359468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:09:48.363762 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 01:09:48.402147 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 01:09:48.412430 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:09:49.281628 systemd-networkd[1481]: eth0: Ignoring DHCPv6 address 2a02:1348:179:83ec:24:19ff:fee6:fb2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:83ec:24:19ff:fee6:fb2/64 assigned by NDisc. Jan 23 01:09:49.281642 systemd-networkd[1481]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 23 01:09:49.468880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 01:09:49.480279 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:09:49.762481 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:09:50.114979 kubelet[1703]: E0123 01:09:50.114845 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:09:50.117870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:09:50.118158 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:09:50.119248 systemd[1]: kubelet.service: Consumed 1.090s CPU time, 257.2M memory peak. Jan 23 01:09:50.425447 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:09:51.458613 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 01:09:51.461441 systemd[1]: Started sshd@0-10.230.15.178:22-20.161.92.111:42162.service - OpenSSH per-connection server daemon (20.161.92.111:42162). Jan 23 01:09:52.075418 sshd[1713]: Accepted publickey for core from 20.161.92.111 port 42162 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:09:52.077824 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:52.089680 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 01:09:52.096940 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 01:09:52.113676 systemd-logind[1569]: New session 1 of user core. Jan 23 01:09:52.139751 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 23 01:09:52.147118 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 01:09:52.165065 (systemd)[1718]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 01:09:52.170918 systemd-logind[1569]: New session c1 of user core. Jan 23 01:09:52.477906 systemd[1718]: Queued start job for default target default.target. Jan 23 01:09:52.496741 systemd[1718]: Created slice app.slice - User Application Slice. Jan 23 01:09:52.497049 systemd[1718]: Reached target paths.target - Paths. Jan 23 01:09:52.497255 systemd[1718]: Reached target timers.target - Timers. Jan 23 01:09:52.499938 systemd[1718]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 01:09:52.524994 systemd[1718]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 01:09:52.525221 systemd[1718]: Reached target sockets.target - Sockets. Jan 23 01:09:52.525309 systemd[1718]: Reached target basic.target - Basic System. Jan 23 01:09:52.525395 systemd[1718]: Reached target default.target - Main User Target. Jan 23 01:09:52.525548 systemd[1718]: Startup finished in 342ms. Jan 23 01:09:52.525652 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 01:09:52.536956 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 01:09:52.942814 login[1680]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 23 01:09:52.953462 systemd-logind[1569]: New session 2 of user core. Jan 23 01:09:52.980284 login[1679]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 23 01:09:52.980710 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 01:09:52.985812 systemd[1]: Started sshd@1-10.230.15.178:22-20.161.92.111:48854.service - OpenSSH per-connection server daemon (20.161.92.111:48854). Jan 23 01:09:53.000565 systemd-logind[1569]: New session 3 of user core. Jan 23 01:09:53.004691 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 23 01:09:53.601195 sshd[1733]: Accepted publickey for core from 20.161.92.111 port 48854 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:09:53.604308 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:53.613875 systemd-logind[1569]: New session 4 of user core. Jan 23 01:09:53.619704 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 01:09:53.779452 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:09:53.797999 coreos-metadata[1557]: Jan 23 01:09:53.797 WARN failed to locate config-drive, using the metadata service API instead Jan 23 01:09:53.957226 coreos-metadata[1557]: Jan 23 01:09:53.957 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 23 01:09:53.965046 coreos-metadata[1557]: Jan 23 01:09:53.965 INFO Fetch failed with 404: resource not found Jan 23 01:09:53.965238 coreos-metadata[1557]: Jan 23 01:09:53.965 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 23 01:09:53.965899 coreos-metadata[1557]: Jan 23 01:09:53.965 INFO Fetch successful Jan 23 01:09:53.966049 coreos-metadata[1557]: Jan 23 01:09:53.966 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 23 01:09:53.978267 coreos-metadata[1557]: Jan 23 01:09:53.978 INFO Fetch successful Jan 23 01:09:53.978570 coreos-metadata[1557]: Jan 23 01:09:53.978 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 23 01:09:54.000567 coreos-metadata[1557]: Jan 23 01:09:54.000 INFO Fetch successful Jan 23 01:09:54.000716 coreos-metadata[1557]: Jan 23 01:09:54.000 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 23 01:09:54.007588 sshd[1757]: Connection closed by 20.161.92.111 port 48854 Jan 23 01:09:54.008743 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:54.015013 systemd[1]: 
sshd@1-10.230.15.178:22-20.161.92.111:48854.service: Deactivated successfully. Jan 23 01:09:54.017037 coreos-metadata[1557]: Jan 23 01:09:54.017 INFO Fetch successful Jan 23 01:09:54.017215 coreos-metadata[1557]: Jan 23 01:09:54.017 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 23 01:09:54.017885 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 01:09:54.019890 systemd-logind[1569]: Session 4 logged out. Waiting for processes to exit. Jan 23 01:09:54.021840 systemd-logind[1569]: Removed session 4. Jan 23 01:09:54.039090 coreos-metadata[1557]: Jan 23 01:09:54.039 INFO Fetch successful Jan 23 01:09:54.083430 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 01:09:54.085270 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 01:09:54.112083 systemd[1]: Started sshd@2-10.230.15.178:22-20.161.92.111:48866.service - OpenSSH per-connection server daemon (20.161.92.111:48866). 
Jan 23 01:09:54.442459 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 01:09:54.459006 coreos-metadata[1635]: Jan 23 01:09:54.458 WARN failed to locate config-drive, using the metadata service API instead
Jan 23 01:09:54.481191 coreos-metadata[1635]: Jan 23 01:09:54.481 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 23 01:09:54.502715 coreos-metadata[1635]: Jan 23 01:09:54.502 INFO Fetch successful
Jan 23 01:09:54.503053 coreos-metadata[1635]: Jan 23 01:09:54.502 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 23 01:09:54.543918 coreos-metadata[1635]: Jan 23 01:09:54.543 INFO Fetch successful
Jan 23 01:09:54.546571 unknown[1635]: wrote ssh authorized keys file for user: core
Jan 23 01:09:54.571884 update-ssh-keys[1776]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 01:09:54.574160 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 23 01:09:54.577077 systemd[1]: Finished sshkeys.service.
Jan 23 01:09:54.580643 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 23 01:09:54.582516 systemd[1]: Startup finished in 3.637s (kernel) + 16.258s (initrd) + 12.232s (userspace) = 32.129s.
Jan 23 01:09:54.698797 sshd[1770]: Accepted publickey for core from 20.161.92.111 port 48866 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:09:54.701578 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:09:54.709184 systemd-logind[1569]: New session 5 of user core.
Jan 23 01:09:54.718630 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 23 01:09:55.102553 sshd[1779]: Connection closed by 20.161.92.111 port 48866
Jan 23 01:09:55.103712 sshd-session[1770]: pam_unix(sshd:session): session closed for user core
Jan 23 01:09:55.110044 systemd[1]: sshd@2-10.230.15.178:22-20.161.92.111:48866.service: Deactivated successfully.
Jan 23 01:09:55.113140 systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 01:09:55.115616 systemd-logind[1569]: Session 5 logged out. Waiting for processes to exit.
Jan 23 01:09:55.117272 systemd-logind[1569]: Removed session 5.
Jan 23 01:10:00.274798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 23 01:10:00.278703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:10:00.501459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:10:00.516914 (kubelet)[1792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:10:00.586980 kubelet[1792]: E0123 01:10:00.586768 1792 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:10:00.591768 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:10:00.592072 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:10:00.593239 systemd[1]: kubelet.service: Consumed 259ms CPU time, 110.3M memory peak.
Jan 23 01:10:05.213569 systemd[1]: Started sshd@3-10.230.15.178:22-20.161.92.111:36688.service - OpenSSH per-connection server daemon (20.161.92.111:36688).
Jan 23 01:10:05.798043 sshd[1799]: Accepted publickey for core from 20.161.92.111 port 36688 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:10:05.798864 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:10:05.807670 systemd-logind[1569]: New session 6 of user core.
Jan 23 01:10:05.813709 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 23 01:10:06.201828 sshd[1802]: Connection closed by 20.161.92.111 port 36688
Jan 23 01:10:06.204691 sshd-session[1799]: pam_unix(sshd:session): session closed for user core
Jan 23 01:10:06.209548 systemd[1]: sshd@3-10.230.15.178:22-20.161.92.111:36688.service: Deactivated successfully.
Jan 23 01:10:06.211991 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 01:10:06.214808 systemd-logind[1569]: Session 6 logged out. Waiting for processes to exit.
Jan 23 01:10:06.216867 systemd-logind[1569]: Removed session 6.
Jan 23 01:10:06.308567 systemd[1]: Started sshd@4-10.230.15.178:22-20.161.92.111:36692.service - OpenSSH per-connection server daemon (20.161.92.111:36692).
Jan 23 01:10:06.890553 sshd[1808]: Accepted publickey for core from 20.161.92.111 port 36692 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:10:06.892587 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:10:06.900904 systemd-logind[1569]: New session 7 of user core.
Jan 23 01:10:06.910785 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 01:10:07.289148 sshd[1811]: Connection closed by 20.161.92.111 port 36692
Jan 23 01:10:07.290213 sshd-session[1808]: pam_unix(sshd:session): session closed for user core
Jan 23 01:10:07.296801 systemd-logind[1569]: Session 7 logged out. Waiting for processes to exit.
Jan 23 01:10:07.297316 systemd[1]: sshd@4-10.230.15.178:22-20.161.92.111:36692.service: Deactivated successfully.
Jan 23 01:10:07.299752 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 01:10:07.302374 systemd-logind[1569]: Removed session 7.
Jan 23 01:10:07.395141 systemd[1]: Started sshd@5-10.230.15.178:22-20.161.92.111:36702.service - OpenSSH per-connection server daemon (20.161.92.111:36702).
Jan 23 01:10:07.985623 sshd[1817]: Accepted publickey for core from 20.161.92.111 port 36702 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:10:07.987400 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:10:07.995124 systemd-logind[1569]: New session 8 of user core.
Jan 23 01:10:08.006793 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 23 01:10:08.398539 sshd[1820]: Connection closed by 20.161.92.111 port 36702
Jan 23 01:10:08.399820 sshd-session[1817]: pam_unix(sshd:session): session closed for user core
Jan 23 01:10:08.405628 systemd[1]: sshd@5-10.230.15.178:22-20.161.92.111:36702.service: Deactivated successfully.
Jan 23 01:10:08.408679 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 01:10:08.410241 systemd-logind[1569]: Session 8 logged out. Waiting for processes to exit.
Jan 23 01:10:08.412581 systemd-logind[1569]: Removed session 8.
Jan 23 01:10:08.499889 systemd[1]: Started sshd@6-10.230.15.178:22-20.161.92.111:36704.service - OpenSSH per-connection server daemon (20.161.92.111:36704).
Jan 23 01:10:09.078644 sshd[1826]: Accepted publickey for core from 20.161.92.111 port 36704 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:10:09.081052 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:10:09.087830 systemd-logind[1569]: New session 9 of user core.
Jan 23 01:10:09.098754 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 23 01:10:09.406545 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 23 01:10:09.407005 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 01:10:09.421091 sudo[1830]: pam_unix(sudo:session): session closed for user root
Jan 23 01:10:09.510817 sshd[1829]: Connection closed by 20.161.92.111 port 36704
Jan 23 01:10:09.512686 sshd-session[1826]: pam_unix(sshd:session): session closed for user core
Jan 23 01:10:09.518908 systemd[1]: sshd@6-10.230.15.178:22-20.161.92.111:36704.service: Deactivated successfully.
Jan 23 01:10:09.521649 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 01:10:09.523598 systemd-logind[1569]: Session 9 logged out. Waiting for processes to exit.
Jan 23 01:10:09.525429 systemd-logind[1569]: Removed session 9.
Jan 23 01:10:09.616728 systemd[1]: Started sshd@7-10.230.15.178:22-20.161.92.111:36706.service - OpenSSH per-connection server daemon (20.161.92.111:36706).
Jan 23 01:10:10.199667 sshd[1836]: Accepted publickey for core from 20.161.92.111 port 36706 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:10:10.201331 sshd-session[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:10:10.208158 systemd-logind[1569]: New session 10 of user core.
Jan 23 01:10:10.220738 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 23 01:10:10.514377 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 23 01:10:10.514845 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 01:10:10.521735 sudo[1841]: pam_unix(sudo:session): session closed for user root
Jan 23 01:10:10.529827 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 23 01:10:10.530258 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 01:10:10.544450 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 01:10:10.591276 augenrules[1863]: No rules
Jan 23 01:10:10.592783 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 01:10:10.593218 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 01:10:10.594758 sudo[1840]: pam_unix(sudo:session): session closed for user root
Jan 23 01:10:10.596922 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 23 01:10:10.600216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:10:10.685419 sshd[1839]: Connection closed by 20.161.92.111 port 36706
Jan 23 01:10:10.686155 sshd-session[1836]: pam_unix(sshd:session): session closed for user core
Jan 23 01:10:10.692768 systemd[1]: sshd@7-10.230.15.178:22-20.161.92.111:36706.service: Deactivated successfully.
Jan 23 01:10:10.696065 systemd[1]: session-10.scope: Deactivated successfully.
Jan 23 01:10:10.698740 systemd-logind[1569]: Session 10 logged out. Waiting for processes to exit.
Jan 23 01:10:10.700355 systemd-logind[1569]: Removed session 10.
Jan 23 01:10:10.775121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:10:10.784251 (kubelet)[1879]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:10:10.792734 systemd[1]: Started sshd@8-10.230.15.178:22-20.161.92.111:36708.service - OpenSSH per-connection server daemon (20.161.92.111:36708).
Jan 23 01:10:10.873214 kubelet[1879]: E0123 01:10:10.873147 1879 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:10:10.877051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:10:10.877329 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:10:10.878412 systemd[1]: kubelet.service: Consumed 202ms CPU time, 110.4M memory peak.
Jan 23 01:10:11.411840 sshd[1885]: Accepted publickey for core from 20.161.92.111 port 36708 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:10:11.413721 sshd-session[1885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:10:11.420218 systemd-logind[1569]: New session 11 of user core.
Jan 23 01:10:11.434771 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 23 01:10:11.738266 sudo[1891]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 23 01:10:11.739325 sudo[1891]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 01:10:12.209361 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 23 01:10:12.232091 (dockerd)[1910]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 23 01:10:12.603163 dockerd[1910]: time="2026-01-23T01:10:12.603039876Z" level=info msg="Starting up"
Jan 23 01:10:12.604454 dockerd[1910]: time="2026-01-23T01:10:12.604416655Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 23 01:10:12.623287 dockerd[1910]: time="2026-01-23T01:10:12.623215874Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 23 01:10:12.679993 dockerd[1910]: time="2026-01-23T01:10:12.679859828Z" level=info msg="Loading containers: start."
Jan 23 01:10:12.695949 kernel: Initializing XFRM netlink socket
Jan 23 01:10:13.046365 systemd-networkd[1481]: docker0: Link UP
Jan 23 01:10:13.057799 dockerd[1910]: time="2026-01-23T01:10:13.057703879Z" level=info msg="Loading containers: done."
Jan 23 01:10:13.080181 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3537936273-merged.mount: Deactivated successfully.
Jan 23 01:10:13.083611 dockerd[1910]: time="2026-01-23T01:10:13.083541351Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 23 01:10:13.083735 dockerd[1910]: time="2026-01-23T01:10:13.083673348Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 23 01:10:13.083880 dockerd[1910]: time="2026-01-23T01:10:13.083841782Z" level=info msg="Initializing buildkit"
Jan 23 01:10:13.111191 dockerd[1910]: time="2026-01-23T01:10:13.111102285Z" level=info msg="Completed buildkit initialization"
Jan 23 01:10:13.122724 dockerd[1910]: time="2026-01-23T01:10:13.122638404Z" level=info msg="Daemon has completed initialization"
Jan 23 01:10:13.122935 dockerd[1910]: time="2026-01-23T01:10:13.122865587Z" level=info msg="API listen on /run/docker.sock"
Jan 23 01:10:13.123671 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 23 01:10:14.267510 containerd[1589]: time="2026-01-23T01:10:14.267322518Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Jan 23 01:10:14.947955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount46129093.mount: Deactivated successfully.
Jan 23 01:10:17.069700 containerd[1589]: time="2026-01-23T01:10:17.069576346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:17.071623 containerd[1589]: time="2026-01-23T01:10:17.071154227Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068081"
Jan 23 01:10:17.073830 containerd[1589]: time="2026-01-23T01:10:17.072582977Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:17.091472 containerd[1589]: time="2026-01-23T01:10:17.091428288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:17.093201 containerd[1589]: time="2026-01-23T01:10:17.093161969Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.825701789s"
Jan 23 01:10:17.093371 containerd[1589]: time="2026-01-23T01:10:17.093329983Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\""
Jan 23 01:10:17.094478 containerd[1589]: time="2026-01-23T01:10:17.094448774Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Jan 23 01:10:19.041085 containerd[1589]: time="2026-01-23T01:10:19.039783568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:19.042783 containerd[1589]: time="2026-01-23T01:10:19.042704809Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162448"
Jan 23 01:10:19.059683 containerd[1589]: time="2026-01-23T01:10:19.059457212Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:19.064403 containerd[1589]: time="2026-01-23T01:10:19.063483097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:19.064403 containerd[1589]: time="2026-01-23T01:10:19.064282847Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.969704464s"
Jan 23 01:10:19.064403 containerd[1589]: time="2026-01-23T01:10:19.064322703Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\""
Jan 23 01:10:19.065600 containerd[1589]: time="2026-01-23T01:10:19.065552327Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Jan 23 01:10:19.295969 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 01:10:20.684446 containerd[1589]: time="2026-01-23T01:10:20.684300391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:20.685706 containerd[1589]: time="2026-01-23T01:10:20.685585439Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725935"
Jan 23 01:10:20.686562 containerd[1589]: time="2026-01-23T01:10:20.686523980Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:20.690357 containerd[1589]: time="2026-01-23T01:10:20.690323695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:20.692120 containerd[1589]: time="2026-01-23T01:10:20.691949630Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.62619358s"
Jan 23 01:10:20.692120 containerd[1589]: time="2026-01-23T01:10:20.691992699Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\""
Jan 23 01:10:20.692930 containerd[1589]: time="2026-01-23T01:10:20.692707503Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Jan 23 01:10:21.025069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 23 01:10:21.028502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:10:21.239489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:10:21.251880 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:10:21.346420 kubelet[2202]: E0123 01:10:21.344849 2202 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:10:21.348578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:10:21.348853 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:10:21.349749 systemd[1]: kubelet.service: Consumed 244ms CPU time, 110.2M memory peak.
Jan 23 01:10:22.348341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4047703800.mount: Deactivated successfully.
Jan 23 01:10:22.848717 containerd[1589]: time="2026-01-23T01:10:22.848644629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:22.850262 containerd[1589]: time="2026-01-23T01:10:22.850188716Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965301"
Jan 23 01:10:22.852337 containerd[1589]: time="2026-01-23T01:10:22.851446115Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:22.853711 containerd[1589]: time="2026-01-23T01:10:22.853647305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:22.855087 containerd[1589]: time="2026-01-23T01:10:22.854460454Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 2.161436869s"
Jan 23 01:10:22.855087 containerd[1589]: time="2026-01-23T01:10:22.854511077Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\""
Jan 23 01:10:22.855256 containerd[1589]: time="2026-01-23T01:10:22.855228651Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Jan 23 01:10:23.470537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount706668731.mount: Deactivated successfully.
Jan 23 01:10:24.976842 containerd[1589]: time="2026-01-23T01:10:24.976750016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:24.978229 containerd[1589]: time="2026-01-23T01:10:24.978194320Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015"
Jan 23 01:10:24.995056 containerd[1589]: time="2026-01-23T01:10:24.994971043Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:24.998776 containerd[1589]: time="2026-01-23T01:10:24.998729015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:25.000275 containerd[1589]: time="2026-01-23T01:10:25.000198709Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.144934386s"
Jan 23 01:10:25.000352 containerd[1589]: time="2026-01-23T01:10:25.000280703Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Jan 23 01:10:25.001569 containerd[1589]: time="2026-01-23T01:10:25.001216214Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Jan 23 01:10:25.508374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1421997566.mount: Deactivated successfully.
Jan 23 01:10:25.514884 containerd[1589]: time="2026-01-23T01:10:25.513830133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:25.514884 containerd[1589]: time="2026-01-23T01:10:25.514847649Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226"
Jan 23 01:10:25.515628 containerd[1589]: time="2026-01-23T01:10:25.515595939Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:25.518189 containerd[1589]: time="2026-01-23T01:10:25.518149170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:25.519316 containerd[1589]: time="2026-01-23T01:10:25.519282686Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 518.024515ms"
Jan 23 01:10:25.519462 containerd[1589]: time="2026-01-23T01:10:25.519435438Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Jan 23 01:10:25.520646 containerd[1589]: time="2026-01-23T01:10:25.520601227Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Jan 23 01:10:26.111813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1422031283.mount: Deactivated successfully.
Jan 23 01:10:30.771347 containerd[1589]: time="2026-01-23T01:10:30.771169439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:30.773888 containerd[1589]: time="2026-01-23T01:10:30.773825243Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166822"
Jan 23 01:10:30.908231 containerd[1589]: time="2026-01-23T01:10:30.908113836Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:30.915886 containerd[1589]: time="2026-01-23T01:10:30.915822068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:30.918230 containerd[1589]: time="2026-01-23T01:10:30.917538112Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 5.396706315s"
Jan 23 01:10:30.918230 containerd[1589]: time="2026-01-23T01:10:30.917614428Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Jan 23 01:10:31.524968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 23 01:10:31.528909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:10:31.835446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:10:31.850957 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:10:31.921597 kubelet[2353]: E0123 01:10:31.921434 2353 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:10:31.930437 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:10:31.930677 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:10:31.931296 systemd[1]: kubelet.service: Consumed 245ms CPU time, 110.6M memory peak.
Jan 23 01:10:32.467479 update_engine[1573]: I20260123 01:10:32.466617 1573 update_attempter.cc:509] Updating boot flags...
Jan 23 01:10:36.586267 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:10:36.587152 systemd[1]: kubelet.service: Consumed 245ms CPU time, 110.6M memory peak.
Jan 23 01:10:36.590052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:10:36.627965 systemd[1]: Reload requested from client PID 2386 ('systemctl') (unit session-11.scope)...
Jan 23 01:10:36.628016 systemd[1]: Reloading...
Jan 23 01:10:36.847421 zram_generator::config[2437]: No configuration found.
Jan 23 01:10:37.125889 systemd[1]: Reloading finished in 497 ms.
Jan 23 01:10:37.205274 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 23 01:10:37.205562 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 23 01:10:37.206202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:10:37.206279 systemd[1]: kubelet.service: Consumed 145ms CPU time, 98.2M memory peak.
Jan 23 01:10:37.209011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:10:37.391922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:10:37.401084 (kubelet)[2499]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 01:10:37.502307 kubelet[2499]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 01:10:37.502307 kubelet[2499]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 01:10:37.504570 kubelet[2499]: I0123 01:10:37.504165 2499 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 01:10:38.769013 kubelet[2499]: I0123 01:10:38.768919 2499 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 23 01:10:38.769013 kubelet[2499]: I0123 01:10:38.768961 2499 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 01:10:38.769013 kubelet[2499]: I0123 01:10:38.769014 2499 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 23 01:10:38.769677 kubelet[2499]: I0123 01:10:38.769035 2499 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 01:10:38.769677 kubelet[2499]: I0123 01:10:38.769309 2499 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 01:10:38.783856 kubelet[2499]: I0123 01:10:38.783805 2499 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:10:38.786933 kubelet[2499]: E0123 01:10:38.786848 2499 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.15.178:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.15.178:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 01:10:38.798970 kubelet[2499]: I0123 01:10:38.798940 2499 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:10:38.807584 kubelet[2499]: I0123 01:10:38.807551 2499 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 01:10:38.809464 kubelet[2499]: I0123 01:10:38.809411 2499 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:10:38.811007 kubelet[2499]: I0123 01:10:38.809451 2499 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-p26ko.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:10:38.811314 kubelet[2499]: I0123 01:10:38.811021 2499 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 
01:10:38.811314 kubelet[2499]: I0123 01:10:38.811058 2499 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 01:10:38.811314 kubelet[2499]: I0123 01:10:38.811233 2499 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 01:10:38.815041 kubelet[2499]: I0123 01:10:38.814996 2499 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:10:38.815431 kubelet[2499]: I0123 01:10:38.815407 2499 kubelet.go:475] "Attempting to sync node with API server" Jan 23 01:10:38.815518 kubelet[2499]: I0123 01:10:38.815439 2499 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:10:38.816265 kubelet[2499]: E0123 01:10:38.816206 2499 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.15.178:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-p26ko.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.15.178:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:10:38.816844 kubelet[2499]: I0123 01:10:38.816813 2499 kubelet.go:387] "Adding apiserver pod source" Jan 23 01:10:38.816938 kubelet[2499]: I0123 01:10:38.816866 2499 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:10:38.823944 kubelet[2499]: E0123 01:10:38.823435 2499 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.15.178:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.15.178:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 01:10:38.823944 kubelet[2499]: I0123 01:10:38.823907 2499 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:10:38.827497 kubelet[2499]: I0123 01:10:38.827468 2499 kubelet.go:940] 
"Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 01:10:38.827600 kubelet[2499]: I0123 01:10:38.827517 2499 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 01:10:38.829513 kubelet[2499]: W0123 01:10:38.829484 2499 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 01:10:38.836729 kubelet[2499]: I0123 01:10:38.836692 2499 server.go:1262] "Started kubelet" Jan 23 01:10:38.838240 kubelet[2499]: I0123 01:10:38.838084 2499 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:10:38.842641 kubelet[2499]: E0123 01:10:38.841005 2499 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.15.178:6443/api/v1/namespaces/default/events\": dial tcp 10.230.15.178:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-p26ko.gb1.brightbox.com.188d36f1591e33c7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-p26ko.gb1.brightbox.com,UID:srv-p26ko.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-p26ko.gb1.brightbox.com,},FirstTimestamp:2026-01-23 01:10:38.836642759 +0000 UTC m=+1.430926918,LastTimestamp:2026-01-23 01:10:38.836642759 +0000 UTC m=+1.430926918,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-p26ko.gb1.brightbox.com,}" Jan 23 01:10:38.844925 kubelet[2499]: I0123 01:10:38.844880 2499 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:10:38.849932 kubelet[2499]: I0123 01:10:38.849849 2499 server.go:310] "Adding debug handlers to kubelet 
server" Jan 23 01:10:38.855067 kubelet[2499]: I0123 01:10:38.855044 2499 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 01:10:38.855657 kubelet[2499]: E0123 01:10:38.855595 2499 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-p26ko.gb1.brightbox.com\" not found" Jan 23 01:10:38.857983 kubelet[2499]: I0123 01:10:38.857254 2499 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 01:10:38.858059 kubelet[2499]: I0123 01:10:38.857333 2499 reconciler.go:29] "Reconciler: start to sync state" Jan 23 01:10:38.858516 kubelet[2499]: I0123 01:10:38.858436 2499 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:10:38.858716 kubelet[2499]: E0123 01:10:38.858644 2499 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.15.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.15.178:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:10:38.858716 kubelet[2499]: I0123 01:10:38.858662 2499 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 01:10:38.858935 kubelet[2499]: E0123 01:10:38.858833 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-p26ko.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.178:6443: connect: connection refused" interval="200ms" Jan 23 01:10:38.859481 kubelet[2499]: I0123 01:10:38.859439 2499 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:10:38.862442 kubelet[2499]: I0123 01:10:38.861475 2499 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:10:38.865131 kubelet[2499]: E0123 01:10:38.865103 2499 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:10:38.867952 kubelet[2499]: I0123 01:10:38.867928 2499 factory.go:223] Registration of the containerd container factory successfully Jan 23 01:10:38.868063 kubelet[2499]: I0123 01:10:38.868045 2499 factory.go:223] Registration of the systemd container factory successfully Jan 23 01:10:38.868238 kubelet[2499]: I0123 01:10:38.868213 2499 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:10:38.893640 kubelet[2499]: I0123 01:10:38.893611 2499 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:10:38.897765 kubelet[2499]: I0123 01:10:38.897451 2499 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:10:38.897765 kubelet[2499]: I0123 01:10:38.897493 2499 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:10:38.897765 kubelet[2499]: I0123 01:10:38.896665 2499 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 01:10:38.900802 kubelet[2499]: I0123 01:10:38.900473 2499 policy_none.go:49] "None policy: Start" Jan 23 01:10:38.900802 kubelet[2499]: I0123 01:10:38.900513 2499 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 01:10:38.900802 kubelet[2499]: I0123 01:10:38.900536 2499 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 01:10:38.901577 kubelet[2499]: I0123 01:10:38.901548 2499 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 01:10:38.901654 kubelet[2499]: I0123 01:10:38.901584 2499 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 01:10:38.901654 kubelet[2499]: I0123 01:10:38.901634 2499 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 01:10:38.901771 kubelet[2499]: E0123 01:10:38.901708 2499 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:10:38.903177 kubelet[2499]: I0123 01:10:38.903155 2499 policy_none.go:47] "Start" Jan 23 01:10:38.911082 kubelet[2499]: E0123 01:10:38.911035 2499 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.15.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.15.178:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 01:10:38.917126 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 01:10:38.932161 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 01:10:38.938826 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 23 01:10:38.958751 kubelet[2499]: E0123 01:10:38.957831 2499 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 01:10:38.958751 kubelet[2499]: I0123 01:10:38.958207 2499 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:10:38.958751 kubelet[2499]: I0123 01:10:38.958240 2499 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:10:38.962604 kubelet[2499]: E0123 01:10:38.962580 2499 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:10:38.962866 kubelet[2499]: E0123 01:10:38.962846 2499 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-p26ko.gb1.brightbox.com\" not found" Jan 23 01:10:38.962984 kubelet[2499]: I0123 01:10:38.962850 2499 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:10:39.022903 systemd[1]: Created slice kubepods-burstable-pod03f547a28470875a848c3d0fccc2a002.slice - libcontainer container kubepods-burstable-pod03f547a28470875a848c3d0fccc2a002.slice. Jan 23 01:10:39.036167 kubelet[2499]: E0123 01:10:39.036100 2499 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-p26ko.gb1.brightbox.com\" not found" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:39.040226 systemd[1]: Created slice kubepods-burstable-pod37122917d1c33f33f4544464914ee9fa.slice - libcontainer container kubepods-burstable-pod37122917d1c33f33f4544464914ee9fa.slice. 
Jan 23 01:10:39.044371 kubelet[2499]: E0123 01:10:39.044331 2499 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-p26ko.gb1.brightbox.com\" not found" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:39.048635 systemd[1]: Created slice kubepods-burstable-podac8726301add1fcccfac64197a7e94db.slice - libcontainer container kubepods-burstable-podac8726301add1fcccfac64197a7e94db.slice. Jan 23 01:10:39.051675 kubelet[2499]: E0123 01:10:39.051650 2499 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-p26ko.gb1.brightbox.com\" not found" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:39.058673 kubelet[2499]: I0123 01:10:39.058632 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37122917d1c33f33f4544464914ee9fa-ca-certs\") pod \"kube-apiserver-srv-p26ko.gb1.brightbox.com\" (UID: \"37122917d1c33f33f4544464914ee9fa\") " pod="kube-system/kube-apiserver-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:39.058828 kubelet[2499]: I0123 01:10:39.058804 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37122917d1c33f33f4544464914ee9fa-k8s-certs\") pod \"kube-apiserver-srv-p26ko.gb1.brightbox.com\" (UID: \"37122917d1c33f33f4544464914ee9fa\") " pod="kube-system/kube-apiserver-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:39.058954 kubelet[2499]: I0123 01:10:39.058927 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37122917d1c33f33f4544464914ee9fa-usr-share-ca-certificates\") pod \"kube-apiserver-srv-p26ko.gb1.brightbox.com\" (UID: \"37122917d1c33f33f4544464914ee9fa\") " pod="kube-system/kube-apiserver-srv-p26ko.gb1.brightbox.com" Jan 23 
01:10:39.059071 kubelet[2499]: I0123 01:10:39.059047 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03f547a28470875a848c3d0fccc2a002-ca-certs\") pod \"kube-controller-manager-srv-p26ko.gb1.brightbox.com\" (UID: \"03f547a28470875a848c3d0fccc2a002\") " pod="kube-system/kube-controller-manager-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:39.059196 kubelet[2499]: I0123 01:10:39.059172 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/03f547a28470875a848c3d0fccc2a002-flexvolume-dir\") pod \"kube-controller-manager-srv-p26ko.gb1.brightbox.com\" (UID: \"03f547a28470875a848c3d0fccc2a002\") " pod="kube-system/kube-controller-manager-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:39.059999 kubelet[2499]: E0123 01:10:39.059734 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-p26ko.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.178:6443: connect: connection refused" interval="400ms" Jan 23 01:10:39.059999 kubelet[2499]: I0123 01:10:39.059285 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03f547a28470875a848c3d0fccc2a002-kubeconfig\") pod \"kube-controller-manager-srv-p26ko.gb1.brightbox.com\" (UID: \"03f547a28470875a848c3d0fccc2a002\") " pod="kube-system/kube-controller-manager-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:39.059999 kubelet[2499]: I0123 01:10:39.059813 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac8726301add1fcccfac64197a7e94db-kubeconfig\") pod \"kube-scheduler-srv-p26ko.gb1.brightbox.com\" (UID: 
\"ac8726301add1fcccfac64197a7e94db\") " pod="kube-system/kube-scheduler-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:39.059999 kubelet[2499]: I0123 01:10:39.059846 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03f547a28470875a848c3d0fccc2a002-k8s-certs\") pod \"kube-controller-manager-srv-p26ko.gb1.brightbox.com\" (UID: \"03f547a28470875a848c3d0fccc2a002\") " pod="kube-system/kube-controller-manager-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:39.059999 kubelet[2499]: I0123 01:10:39.059880 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03f547a28470875a848c3d0fccc2a002-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-p26ko.gb1.brightbox.com\" (UID: \"03f547a28470875a848c3d0fccc2a002\") " pod="kube-system/kube-controller-manager-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:39.061232 kubelet[2499]: I0123 01:10:39.061199 2499 kubelet_node_status.go:75] "Attempting to register node" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:39.061770 kubelet[2499]: E0123 01:10:39.061704 2499 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.15.178:6443/api/v1/nodes\": dial tcp 10.230.15.178:6443: connect: connection refused" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:39.264952 kubelet[2499]: I0123 01:10:39.264914 2499 kubelet_node_status.go:75] "Attempting to register node" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:39.265487 kubelet[2499]: E0123 01:10:39.265441 2499 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.15.178:6443/api/v1/nodes\": dial tcp 10.230.15.178:6443: connect: connection refused" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:39.343412 containerd[1589]: time="2026-01-23T01:10:39.342633744Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-srv-p26ko.gb1.brightbox.com,Uid:03f547a28470875a848c3d0fccc2a002,Namespace:kube-system,Attempt:0,}" Jan 23 01:10:39.347385 containerd[1589]: time="2026-01-23T01:10:39.347336902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-p26ko.gb1.brightbox.com,Uid:37122917d1c33f33f4544464914ee9fa,Namespace:kube-system,Attempt:0,}" Jan 23 01:10:39.354371 containerd[1589]: time="2026-01-23T01:10:39.354336406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-p26ko.gb1.brightbox.com,Uid:ac8726301add1fcccfac64197a7e94db,Namespace:kube-system,Attempt:0,}" Jan 23 01:10:39.461470 kubelet[2499]: E0123 01:10:39.461409 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-p26ko.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.178:6443: connect: connection refused" interval="800ms" Jan 23 01:10:39.669441 kubelet[2499]: I0123 01:10:39.669233 2499 kubelet_node_status.go:75] "Attempting to register node" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:39.669817 kubelet[2499]: E0123 01:10:39.669772 2499 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.15.178:6443/api/v1/nodes\": dial tcp 10.230.15.178:6443: connect: connection refused" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:39.897026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2916475008.mount: Deactivated successfully. 
Jan 23 01:10:39.901355 containerd[1589]: time="2026-01-23T01:10:39.901259038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:10:39.903102 containerd[1589]: time="2026-01-23T01:10:39.903048848Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 23 01:10:39.905524 containerd[1589]: time="2026-01-23T01:10:39.905472527Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:10:39.909522 containerd[1589]: time="2026-01-23T01:10:39.909196086Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:10:39.911416 containerd[1589]: time="2026-01-23T01:10:39.910958890Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 01:10:39.911416 containerd[1589]: time="2026-01-23T01:10:39.911033892Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:10:39.911963 containerd[1589]: time="2026-01-23T01:10:39.911929941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:10:39.912603 containerd[1589]: time="2026-01-23T01:10:39.912568116Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 
01:10:39.915466 containerd[1589]: time="2026-01-23T01:10:39.914716033Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 569.282767ms" Jan 23 01:10:39.918855 containerd[1589]: time="2026-01-23T01:10:39.918807193Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 570.136949ms" Jan 23 01:10:39.926480 containerd[1589]: time="2026-01-23T01:10:39.926052048Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 570.459603ms" Jan 23 01:10:40.064731 containerd[1589]: time="2026-01-23T01:10:40.064647257Z" level=info msg="connecting to shim ffaa5cd046a723e525daff7aa6a1c46a236ac84d7f4023432daf8ca744518fc0" address="unix:///run/containerd/s/dadeabd65705920531ca69bc21d1fd1700607e4cf309d612f043c843c772b1e9" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:40.067042 containerd[1589]: time="2026-01-23T01:10:40.066947031Z" level=info msg="connecting to shim 7b656a8259b4d5bf7fbf8d0a6f8915c8f03f306cf948278a07230e18748b2dad" address="unix:///run/containerd/s/e0639af5649b48a741a369e49d52ac63150a394cd0bc619f8d30ca4a83aedae3" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:40.072541 containerd[1589]: time="2026-01-23T01:10:40.072506146Z" level=info msg="connecting to shim 
612941b09123ba05b32f17b18d080de0eabdb987c938d0b014f7ddde820e412b" address="unix:///run/containerd/s/79d88b18e3591dfa9d6701dd3549d2b96b2a0a6e84b0afd44db36ed2e3c84a04" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:40.122054 kubelet[2499]: E0123 01:10:40.121987 2499 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.15.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.15.178:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:10:40.183672 systemd[1]: Started cri-containerd-612941b09123ba05b32f17b18d080de0eabdb987c938d0b014f7ddde820e412b.scope - libcontainer container 612941b09123ba05b32f17b18d080de0eabdb987c938d0b014f7ddde820e412b. Jan 23 01:10:40.187236 systemd[1]: Started cri-containerd-7b656a8259b4d5bf7fbf8d0a6f8915c8f03f306cf948278a07230e18748b2dad.scope - libcontainer container 7b656a8259b4d5bf7fbf8d0a6f8915c8f03f306cf948278a07230e18748b2dad. Jan 23 01:10:40.189693 systemd[1]: Started cri-containerd-ffaa5cd046a723e525daff7aa6a1c46a236ac84d7f4023432daf8ca744518fc0.scope - libcontainer container ffaa5cd046a723e525daff7aa6a1c46a236ac84d7f4023432daf8ca744518fc0. 
Jan 23 01:10:40.263128 kubelet[2499]: E0123 01:10:40.263079 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-p26ko.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.178:6443: connect: connection refused" interval="1.6s" Jan 23 01:10:40.308511 kubelet[2499]: E0123 01:10:40.308454 2499 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.15.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.15.178:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 01:10:40.320652 containerd[1589]: time="2026-01-23T01:10:40.320436469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-p26ko.gb1.brightbox.com,Uid:ac8726301add1fcccfac64197a7e94db,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b656a8259b4d5bf7fbf8d0a6f8915c8f03f306cf948278a07230e18748b2dad\"" Jan 23 01:10:40.329410 containerd[1589]: time="2026-01-23T01:10:40.328481649Z" level=info msg="CreateContainer within sandbox \"7b656a8259b4d5bf7fbf8d0a6f8915c8f03f306cf948278a07230e18748b2dad\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 01:10:40.337184 containerd[1589]: time="2026-01-23T01:10:40.336983449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-p26ko.gb1.brightbox.com,Uid:37122917d1c33f33f4544464914ee9fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"612941b09123ba05b32f17b18d080de0eabdb987c938d0b014f7ddde820e412b\"" Jan 23 01:10:40.341895 kubelet[2499]: E0123 01:10:40.341852 2499 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.15.178:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-p26ko.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.15.178:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:10:40.344457 containerd[1589]: time="2026-01-23T01:10:40.344364357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-p26ko.gb1.brightbox.com,Uid:03f547a28470875a848c3d0fccc2a002,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffaa5cd046a723e525daff7aa6a1c46a236ac84d7f4023432daf8ca744518fc0\"" Jan 23 01:10:40.345316 containerd[1589]: time="2026-01-23T01:10:40.345273394Z" level=info msg="CreateContainer within sandbox \"612941b09123ba05b32f17b18d080de0eabdb987c938d0b014f7ddde820e412b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 01:10:40.346564 containerd[1589]: time="2026-01-23T01:10:40.346514911Z" level=info msg="Container 8d1c1fa19c8a809e9ae547350651b275bf5bbb092c992b22543fc6d9c8f81446: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:40.357847 containerd[1589]: time="2026-01-23T01:10:40.357814631Z" level=info msg="CreateContainer within sandbox \"ffaa5cd046a723e525daff7aa6a1c46a236ac84d7f4023432daf8ca744518fc0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 01:10:40.364089 containerd[1589]: time="2026-01-23T01:10:40.363996861Z" level=info msg="Container 3644a5110a9bce03ff1fbbe60b204f4b4006cd45f439870bde437e2918469629: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:40.367171 containerd[1589]: time="2026-01-23T01:10:40.367093501Z" level=info msg="CreateContainer within sandbox \"7b656a8259b4d5bf7fbf8d0a6f8915c8f03f306cf948278a07230e18748b2dad\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8d1c1fa19c8a809e9ae547350651b275bf5bbb092c992b22543fc6d9c8f81446\"" Jan 23 01:10:40.368424 containerd[1589]: time="2026-01-23T01:10:40.368376236Z" level=info msg="StartContainer for \"8d1c1fa19c8a809e9ae547350651b275bf5bbb092c992b22543fc6d9c8f81446\"" Jan 23 01:10:40.370602 containerd[1589]: 
time="2026-01-23T01:10:40.370565407Z" level=info msg="connecting to shim 8d1c1fa19c8a809e9ae547350651b275bf5bbb092c992b22543fc6d9c8f81446" address="unix:///run/containerd/s/e0639af5649b48a741a369e49d52ac63150a394cd0bc619f8d30ca4a83aedae3" protocol=ttrpc version=3 Jan 23 01:10:40.375410 containerd[1589]: time="2026-01-23T01:10:40.375349390Z" level=info msg="Container 896a8d652b3762ae5bf186a08d3d040ea7e57668d070213cd143968d6d49c86f: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:40.375595 containerd[1589]: time="2026-01-23T01:10:40.375559604Z" level=info msg="CreateContainer within sandbox \"612941b09123ba05b32f17b18d080de0eabdb987c938d0b014f7ddde820e412b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3644a5110a9bce03ff1fbbe60b204f4b4006cd45f439870bde437e2918469629\"" Jan 23 01:10:40.377321 containerd[1589]: time="2026-01-23T01:10:40.375965907Z" level=info msg="StartContainer for \"3644a5110a9bce03ff1fbbe60b204f4b4006cd45f439870bde437e2918469629\"" Jan 23 01:10:40.377321 containerd[1589]: time="2026-01-23T01:10:40.377253836Z" level=info msg="connecting to shim 3644a5110a9bce03ff1fbbe60b204f4b4006cd45f439870bde437e2918469629" address="unix:///run/containerd/s/79d88b18e3591dfa9d6701dd3549d2b96b2a0a6e84b0afd44db36ed2e3c84a04" protocol=ttrpc version=3 Jan 23 01:10:40.388376 kubelet[2499]: E0123 01:10:40.388318 2499 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.15.178:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.15.178:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 01:10:40.389027 containerd[1589]: time="2026-01-23T01:10:40.388992785Z" level=info msg="CreateContainer within sandbox \"ffaa5cd046a723e525daff7aa6a1c46a236ac84d7f4023432daf8ca744518fc0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"896a8d652b3762ae5bf186a08d3d040ea7e57668d070213cd143968d6d49c86f\"" Jan 23 01:10:40.390798 containerd[1589]: time="2026-01-23T01:10:40.390762441Z" level=info msg="StartContainer for \"896a8d652b3762ae5bf186a08d3d040ea7e57668d070213cd143968d6d49c86f\"" Jan 23 01:10:40.392188 containerd[1589]: time="2026-01-23T01:10:40.392148208Z" level=info msg="connecting to shim 896a8d652b3762ae5bf186a08d3d040ea7e57668d070213cd143968d6d49c86f" address="unix:///run/containerd/s/dadeabd65705920531ca69bc21d1fd1700607e4cf309d612f043c843c772b1e9" protocol=ttrpc version=3 Jan 23 01:10:40.409655 systemd[1]: Started cri-containerd-8d1c1fa19c8a809e9ae547350651b275bf5bbb092c992b22543fc6d9c8f81446.scope - libcontainer container 8d1c1fa19c8a809e9ae547350651b275bf5bbb092c992b22543fc6d9c8f81446. Jan 23 01:10:40.437174 systemd[1]: Started cri-containerd-896a8d652b3762ae5bf186a08d3d040ea7e57668d070213cd143968d6d49c86f.scope - libcontainer container 896a8d652b3762ae5bf186a08d3d040ea7e57668d070213cd143968d6d49c86f. Jan 23 01:10:40.448542 systemd[1]: Started cri-containerd-3644a5110a9bce03ff1fbbe60b204f4b4006cd45f439870bde437e2918469629.scope - libcontainer container 3644a5110a9bce03ff1fbbe60b204f4b4006cd45f439870bde437e2918469629. 
Jan 23 01:10:40.476490 kubelet[2499]: I0123 01:10:40.476451 2499 kubelet_node_status.go:75] "Attempting to register node" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:40.477133 kubelet[2499]: E0123 01:10:40.477081 2499 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.15.178:6443/api/v1/nodes\": dial tcp 10.230.15.178:6443: connect: connection refused" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:40.594651 containerd[1589]: time="2026-01-23T01:10:40.593794445Z" level=info msg="StartContainer for \"8d1c1fa19c8a809e9ae547350651b275bf5bbb092c992b22543fc6d9c8f81446\" returns successfully" Jan 23 01:10:40.595762 containerd[1589]: time="2026-01-23T01:10:40.595727995Z" level=info msg="StartContainer for \"3644a5110a9bce03ff1fbbe60b204f4b4006cd45f439870bde437e2918469629\" returns successfully" Jan 23 01:10:40.598015 containerd[1589]: time="2026-01-23T01:10:40.597981655Z" level=info msg="StartContainer for \"896a8d652b3762ae5bf186a08d3d040ea7e57668d070213cd143968d6d49c86f\" returns successfully" Jan 23 01:10:40.894630 kubelet[2499]: E0123 01:10:40.894579 2499 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.15.178:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.15.178:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 01:10:40.927918 kubelet[2499]: E0123 01:10:40.927808 2499 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-p26ko.gb1.brightbox.com\" not found" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:40.933536 kubelet[2499]: E0123 01:10:40.932298 2499 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-p26ko.gb1.brightbox.com\" not found" node="srv-p26ko.gb1.brightbox.com" Jan 23 
01:10:40.937674 kubelet[2499]: E0123 01:10:40.937623 2499 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-p26ko.gb1.brightbox.com\" not found" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:41.939981 kubelet[2499]: E0123 01:10:41.939943 2499 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-p26ko.gb1.brightbox.com\" not found" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:41.940535 kubelet[2499]: E0123 01:10:41.940310 2499 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-p26ko.gb1.brightbox.com\" not found" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:42.081786 kubelet[2499]: I0123 01:10:42.080355 2499 kubelet_node_status.go:75] "Attempting to register node" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:44.174054 kubelet[2499]: E0123 01:10:44.173995 2499 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-p26ko.gb1.brightbox.com\" not found" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:44.294759 kubelet[2499]: I0123 01:10:44.294669 2499 kubelet_node_status.go:78] "Successfully registered node" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:44.295286 kubelet[2499]: E0123 01:10:44.294978 2499 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"srv-p26ko.gb1.brightbox.com\": node \"srv-p26ko.gb1.brightbox.com\" not found" Jan 23 01:10:44.358645 kubelet[2499]: I0123 01:10:44.357647 2499 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:44.368015 kubelet[2499]: E0123 01:10:44.367982 2499 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-p26ko.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:44.368243 kubelet[2499]: I0123 01:10:44.368216 2499 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:44.371804 kubelet[2499]: E0123 01:10:44.371774 2499 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-p26ko.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:44.372454 kubelet[2499]: I0123 01:10:44.372430 2499 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:44.375320 kubelet[2499]: E0123 01:10:44.375278 2499 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-p26ko.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:44.535164 kubelet[2499]: I0123 01:10:44.534368 2499 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:44.539929 kubelet[2499]: E0123 01:10:44.539827 2499 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-p26ko.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:44.825201 kubelet[2499]: I0123 01:10:44.824798 2499 apiserver.go:52] "Watching apiserver" Jan 23 01:10:44.858309 kubelet[2499]: I0123 01:10:44.858240 2499 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 01:10:46.455326 systemd[1]: Reload requested from client PID 2791 ('systemctl') (unit session-11.scope)... Jan 23 01:10:46.455981 systemd[1]: Reloading... 
Jan 23 01:10:46.590427 zram_generator::config[2836]: No configuration found. Jan 23 01:10:46.983214 systemd[1]: Reloading finished in 526 ms. Jan 23 01:10:47.036707 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:10:47.054280 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 01:10:47.054690 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:10:47.054763 systemd[1]: kubelet.service: Consumed 1.966s CPU time, 122.8M memory peak. Jan 23 01:10:47.058707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:10:47.333235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:10:47.345103 (kubelet)[2899]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:10:47.429638 kubelet[2899]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:10:47.429638 kubelet[2899]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 01:10:47.430152 kubelet[2899]: I0123 01:10:47.429722 2899 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:10:47.446523 kubelet[2899]: I0123 01:10:47.446483 2899 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 01:10:47.446523 kubelet[2899]: I0123 01:10:47.446514 2899 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:10:47.446716 kubelet[2899]: I0123 01:10:47.446564 2899 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 01:10:47.446716 kubelet[2899]: I0123 01:10:47.446575 2899 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 01:10:47.448233 kubelet[2899]: I0123 01:10:47.446935 2899 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 01:10:47.449240 kubelet[2899]: I0123 01:10:47.449216 2899 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 01:10:47.455036 kubelet[2899]: I0123 01:10:47.454760 2899 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:10:47.492580 kubelet[2899]: I0123 01:10:47.492528 2899 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:10:47.499826 kubelet[2899]: I0123 01:10:47.499782 2899 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 01:10:47.502021 kubelet[2899]: I0123 01:10:47.501434 2899 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:10:47.502021 kubelet[2899]: I0123 01:10:47.501485 2899 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-p26ko.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:10:47.502021 kubelet[2899]: I0123 01:10:47.501767 2899 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 
01:10:47.502021 kubelet[2899]: I0123 01:10:47.501781 2899 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 01:10:47.502332 kubelet[2899]: I0123 01:10:47.501816 2899 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 01:10:47.507762 kubelet[2899]: I0123 01:10:47.507090 2899 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:10:47.512082 kubelet[2899]: I0123 01:10:47.512061 2899 kubelet.go:475] "Attempting to sync node with API server" Jan 23 01:10:47.512372 kubelet[2899]: I0123 01:10:47.512349 2899 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:10:47.512529 kubelet[2899]: I0123 01:10:47.512510 2899 kubelet.go:387] "Adding apiserver pod source" Jan 23 01:10:47.513087 kubelet[2899]: I0123 01:10:47.513066 2899 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:10:47.520070 kubelet[2899]: I0123 01:10:47.520033 2899 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:10:47.520993 kubelet[2899]: I0123 01:10:47.520883 2899 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 01:10:47.521271 kubelet[2899]: I0123 01:10:47.521211 2899 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 01:10:47.528799 kubelet[2899]: I0123 01:10:47.528774 2899 server.go:1262] "Started kubelet" Jan 23 01:10:47.539561 kubelet[2899]: I0123 01:10:47.539520 2899 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:10:47.560624 kubelet[2899]: I0123 01:10:47.560525 2899 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:10:47.572246 kubelet[2899]: I0123 01:10:47.571117 2899 server.go:310] "Adding debug handlers to 
kubelet server" Jan 23 01:10:47.586780 kubelet[2899]: I0123 01:10:47.586652 2899 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:10:47.594135 kubelet[2899]: I0123 01:10:47.593527 2899 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 01:10:47.594135 kubelet[2899]: I0123 01:10:47.593796 2899 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:10:47.594135 kubelet[2899]: I0123 01:10:47.587331 2899 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:10:47.594135 kubelet[2899]: I0123 01:10:47.594005 2899 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 01:10:47.595767 kubelet[2899]: I0123 01:10:47.595703 2899 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 01:10:47.596358 kubelet[2899]: I0123 01:10:47.596230 2899 reconciler.go:29] "Reconciler: start to sync state" Jan 23 01:10:47.603028 kubelet[2899]: I0123 01:10:47.602966 2899 factory.go:223] Registration of the systemd container factory successfully Jan 23 01:10:47.603345 kubelet[2899]: I0123 01:10:47.603273 2899 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:10:47.613028 kubelet[2899]: I0123 01:10:47.612939 2899 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 01:10:47.617107 kubelet[2899]: I0123 01:10:47.616893 2899 factory.go:223] Registration of the containerd container factory successfully Jan 23 01:10:47.618360 kubelet[2899]: I0123 01:10:47.618325 2899 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 01:10:47.618973 kubelet[2899]: I0123 01:10:47.618499 2899 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 01:10:47.618973 kubelet[2899]: I0123 01:10:47.618563 2899 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 01:10:47.618973 kubelet[2899]: E0123 01:10:47.618643 2899 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:10:47.636778 kubelet[2899]: E0123 01:10:47.636738 2899 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:10:47.719013 kubelet[2899]: E0123 01:10:47.718968 2899 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 01:10:47.721313 kubelet[2899]: I0123 01:10:47.720924 2899 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:10:47.721447 kubelet[2899]: I0123 01:10:47.721424 2899 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:10:47.721599 kubelet[2899]: I0123 01:10:47.721572 2899 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:10:47.724238 kubelet[2899]: I0123 01:10:47.724129 2899 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 01:10:47.724488 kubelet[2899]: I0123 01:10:47.724453 2899 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 01:10:47.725611 kubelet[2899]: I0123 01:10:47.725471 2899 policy_none.go:49] "None policy: Start" Jan 23 01:10:47.725611 kubelet[2899]: I0123 01:10:47.725506 2899 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 01:10:47.725611 kubelet[2899]: I0123 01:10:47.725532 2899 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 01:10:47.726043 kubelet[2899]: I0123 01:10:47.725900 2899 state_mem.go:77] "Updated machine memory 
state" logger="Memory Manager state checkpoint" Jan 23 01:10:47.726043 kubelet[2899]: I0123 01:10:47.725924 2899 policy_none.go:47] "Start" Jan 23 01:10:47.741434 kubelet[2899]: E0123 01:10:47.741360 2899 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 01:10:47.741929 kubelet[2899]: I0123 01:10:47.741719 2899 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:10:47.741929 kubelet[2899]: I0123 01:10:47.741744 2899 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:10:47.742870 kubelet[2899]: I0123 01:10:47.742463 2899 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:10:47.752240 kubelet[2899]: E0123 01:10:47.751691 2899 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:10:47.873451 kubelet[2899]: I0123 01:10:47.873055 2899 kubelet_node_status.go:75] "Attempting to register node" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:47.887656 kubelet[2899]: I0123 01:10:47.887439 2899 kubelet_node_status.go:124] "Node was previously registered" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:47.888120 kubelet[2899]: I0123 01:10:47.888002 2899 kubelet_node_status.go:78] "Successfully registered node" node="srv-p26ko.gb1.brightbox.com" Jan 23 01:10:47.923150 kubelet[2899]: I0123 01:10:47.923080 2899 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:47.924137 kubelet[2899]: I0123 01:10:47.924085 2899 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:47.934494 kubelet[2899]: I0123 01:10:47.924727 2899 kubelet.go:3219] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:47.944615 kubelet[2899]: I0123 01:10:47.944572 2899 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 01:10:47.945268 kubelet[2899]: I0123 01:10:47.945232 2899 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 01:10:47.954320 kubelet[2899]: I0123 01:10:47.954145 2899 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 01:10:48.000332 kubelet[2899]: I0123 01:10:47.999551 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/03f547a28470875a848c3d0fccc2a002-flexvolume-dir\") pod \"kube-controller-manager-srv-p26ko.gb1.brightbox.com\" (UID: \"03f547a28470875a848c3d0fccc2a002\") " pod="kube-system/kube-controller-manager-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:48.000332 kubelet[2899]: I0123 01:10:47.999615 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03f547a28470875a848c3d0fccc2a002-kubeconfig\") pod \"kube-controller-manager-srv-p26ko.gb1.brightbox.com\" (UID: \"03f547a28470875a848c3d0fccc2a002\") " pod="kube-system/kube-controller-manager-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:48.000332 kubelet[2899]: I0123 01:10:47.999646 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03f547a28470875a848c3d0fccc2a002-usr-share-ca-certificates\") pod 
\"kube-controller-manager-srv-p26ko.gb1.brightbox.com\" (UID: \"03f547a28470875a848c3d0fccc2a002\") " pod="kube-system/kube-controller-manager-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:48.000332 kubelet[2899]: I0123 01:10:47.999678 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac8726301add1fcccfac64197a7e94db-kubeconfig\") pod \"kube-scheduler-srv-p26ko.gb1.brightbox.com\" (UID: \"ac8726301add1fcccfac64197a7e94db\") " pod="kube-system/kube-scheduler-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:48.000332 kubelet[2899]: I0123 01:10:47.999720 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03f547a28470875a848c3d0fccc2a002-ca-certs\") pod \"kube-controller-manager-srv-p26ko.gb1.brightbox.com\" (UID: \"03f547a28470875a848c3d0fccc2a002\") " pod="kube-system/kube-controller-manager-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:48.001047 kubelet[2899]: I0123 01:10:47.999757 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03f547a28470875a848c3d0fccc2a002-k8s-certs\") pod \"kube-controller-manager-srv-p26ko.gb1.brightbox.com\" (UID: \"03f547a28470875a848c3d0fccc2a002\") " pod="kube-system/kube-controller-manager-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:48.001047 kubelet[2899]: I0123 01:10:47.999786 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37122917d1c33f33f4544464914ee9fa-ca-certs\") pod \"kube-apiserver-srv-p26ko.gb1.brightbox.com\" (UID: \"37122917d1c33f33f4544464914ee9fa\") " pod="kube-system/kube-apiserver-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:48.001047 kubelet[2899]: I0123 01:10:47.999820 2899 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37122917d1c33f33f4544464914ee9fa-k8s-certs\") pod \"kube-apiserver-srv-p26ko.gb1.brightbox.com\" (UID: \"37122917d1c33f33f4544464914ee9fa\") " pod="kube-system/kube-apiserver-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:48.001047 kubelet[2899]: I0123 01:10:47.999845 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37122917d1c33f33f4544464914ee9fa-usr-share-ca-certificates\") pod \"kube-apiserver-srv-p26ko.gb1.brightbox.com\" (UID: \"37122917d1c33f33f4544464914ee9fa\") " pod="kube-system/kube-apiserver-srv-p26ko.gb1.brightbox.com" Jan 23 01:10:48.537648 kubelet[2899]: I0123 01:10:48.537551 2899 apiserver.go:52] "Watching apiserver" Jan 23 01:10:48.587576 kubelet[2899]: I0123 01:10:48.587482 2899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-p26ko.gb1.brightbox.com" podStartSLOduration=1.587426239 podStartE2EDuration="1.587426239s" podCreationTimestamp="2026-01-23 01:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:10:48.587003713 +0000 UTC m=+1.235592002" watchObservedRunningTime="2026-01-23 01:10:48.587426239 +0000 UTC m=+1.236014488" Jan 23 01:10:48.597325 kubelet[2899]: I0123 01:10:48.596728 2899 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 01:10:48.620215 kubelet[2899]: I0123 01:10:48.619941 2899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-p26ko.gb1.brightbox.com" podStartSLOduration=1.6199034970000001 podStartE2EDuration="1.619903497s" podCreationTimestamp="2026-01-23 01:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:10:48.618811999 +0000 UTC m=+1.267400273" watchObservedRunningTime="2026-01-23 01:10:48.619903497 +0000 UTC m=+1.268491764" Jan 23 01:10:48.621210 kubelet[2899]: I0123 01:10:48.621055 2899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-p26ko.gb1.brightbox.com" podStartSLOduration=1.621046181 podStartE2EDuration="1.621046181s" podCreationTimestamp="2026-01-23 01:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:10:48.602738834 +0000 UTC m=+1.251327108" watchObservedRunningTime="2026-01-23 01:10:48.621046181 +0000 UTC m=+1.269634449" Jan 23 01:10:51.366789 kubelet[2899]: I0123 01:10:51.366457 2899 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 01:10:51.367945 kubelet[2899]: I0123 01:10:51.367308 2899 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 01:10:51.368058 containerd[1589]: time="2026-01-23T01:10:51.367015461Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 01:10:52.392821 systemd[1]: Created slice kubepods-besteffort-podd26f1e97_6942_4de0_a60a_e926a753449f.slice - libcontainer container kubepods-besteffort-podd26f1e97_6942_4de0_a60a_e926a753449f.slice. 
Jan 23 01:10:52.430863 kubelet[2899]: I0123 01:10:52.430735 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d26f1e97-6942-4de0-a60a-e926a753449f-xtables-lock\") pod \"kube-proxy-xwkcp\" (UID: \"d26f1e97-6942-4de0-a60a-e926a753449f\") " pod="kube-system/kube-proxy-xwkcp" Jan 23 01:10:52.431514 kubelet[2899]: I0123 01:10:52.430790 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d26f1e97-6942-4de0-a60a-e926a753449f-kube-proxy\") pod \"kube-proxy-xwkcp\" (UID: \"d26f1e97-6942-4de0-a60a-e926a753449f\") " pod="kube-system/kube-proxy-xwkcp" Jan 23 01:10:52.431514 kubelet[2899]: I0123 01:10:52.430974 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d26f1e97-6942-4de0-a60a-e926a753449f-lib-modules\") pod \"kube-proxy-xwkcp\" (UID: \"d26f1e97-6942-4de0-a60a-e926a753449f\") " pod="kube-system/kube-proxy-xwkcp" Jan 23 01:10:52.431514 kubelet[2899]: I0123 01:10:52.431035 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wwvp\" (UniqueName: \"kubernetes.io/projected/d26f1e97-6942-4de0-a60a-e926a753449f-kube-api-access-4wwvp\") pod \"kube-proxy-xwkcp\" (UID: \"d26f1e97-6942-4de0-a60a-e926a753449f\") " pod="kube-system/kube-proxy-xwkcp" Jan 23 01:10:52.552085 systemd[1]: Created slice kubepods-besteffort-poded0168c4_7839_45d2_859e_a88da085253d.slice - libcontainer container kubepods-besteffort-poded0168c4_7839_45d2_859e_a88da085253d.slice. 
Jan 23 01:10:52.633731 kubelet[2899]: I0123 01:10:52.633671 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gwkr\" (UniqueName: \"kubernetes.io/projected/ed0168c4-7839-45d2-859e-a88da085253d-kube-api-access-5gwkr\") pod \"tigera-operator-65cdcdfd6d-cftsn\" (UID: \"ed0168c4-7839-45d2-859e-a88da085253d\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-cftsn"
Jan 23 01:10:52.633957 kubelet[2899]: I0123 01:10:52.633766 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ed0168c4-7839-45d2-859e-a88da085253d-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-cftsn\" (UID: \"ed0168c4-7839-45d2-859e-a88da085253d\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-cftsn"
Jan 23 01:10:52.707353 containerd[1589]: time="2026-01-23T01:10:52.706669617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xwkcp,Uid:d26f1e97-6942-4de0-a60a-e926a753449f,Namespace:kube-system,Attempt:0,}"
Jan 23 01:10:52.736415 containerd[1589]: time="2026-01-23T01:10:52.736261626Z" level=info msg="connecting to shim e5993d7c5f53233d91bc6cdeb0d06f051464ec6ace68e359ef6acacfcc50d289" address="unix:///run/containerd/s/35f3ce3a527db33beb077fba3e3372f70681e0ec0a71e1701f42e1bfec73756f" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:10:52.791818 systemd[1]: Started cri-containerd-e5993d7c5f53233d91bc6cdeb0d06f051464ec6ace68e359ef6acacfcc50d289.scope - libcontainer container e5993d7c5f53233d91bc6cdeb0d06f051464ec6ace68e359ef6acacfcc50d289.
Jan 23 01:10:52.843100 containerd[1589]: time="2026-01-23T01:10:52.843046302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xwkcp,Uid:d26f1e97-6942-4de0-a60a-e926a753449f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5993d7c5f53233d91bc6cdeb0d06f051464ec6ace68e359ef6acacfcc50d289\""
Jan 23 01:10:52.852912 containerd[1589]: time="2026-01-23T01:10:52.852865998Z" level=info msg="CreateContainer within sandbox \"e5993d7c5f53233d91bc6cdeb0d06f051464ec6ace68e359ef6acacfcc50d289\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 23 01:10:52.865716 containerd[1589]: time="2026-01-23T01:10:52.865593046Z" level=info msg="Container 54def4db3043ec214a717bdbf475d92153a4cec49e04100ef7106c3398f3f9dd: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:10:52.878894 containerd[1589]: time="2026-01-23T01:10:52.878564865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-cftsn,Uid:ed0168c4-7839-45d2-859e-a88da085253d,Namespace:tigera-operator,Attempt:0,}"
Jan 23 01:10:52.888643 containerd[1589]: time="2026-01-23T01:10:52.888546198Z" level=info msg="CreateContainer within sandbox \"e5993d7c5f53233d91bc6cdeb0d06f051464ec6ace68e359ef6acacfcc50d289\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"54def4db3043ec214a717bdbf475d92153a4cec49e04100ef7106c3398f3f9dd\""
Jan 23 01:10:52.891211 containerd[1589]: time="2026-01-23T01:10:52.890286975Z" level=info msg="StartContainer for \"54def4db3043ec214a717bdbf475d92153a4cec49e04100ef7106c3398f3f9dd\""
Jan 23 01:10:52.893671 containerd[1589]: time="2026-01-23T01:10:52.893628534Z" level=info msg="connecting to shim 54def4db3043ec214a717bdbf475d92153a4cec49e04100ef7106c3398f3f9dd" address="unix:///run/containerd/s/35f3ce3a527db33beb077fba3e3372f70681e0ec0a71e1701f42e1bfec73756f" protocol=ttrpc version=3
Jan 23 01:10:52.916060 containerd[1589]: time="2026-01-23T01:10:52.915990019Z" level=info msg="connecting to shim 0781b3492aec8ca46a0b5af099ab8b9d577c3faf9a3ad165cb2338e448641e9a" address="unix:///run/containerd/s/7cfb63555f700f3bedb9201d995404e7d776ab52fb34051628702bd818aed041" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:10:52.931600 systemd[1]: Started cri-containerd-54def4db3043ec214a717bdbf475d92153a4cec49e04100ef7106c3398f3f9dd.scope - libcontainer container 54def4db3043ec214a717bdbf475d92153a4cec49e04100ef7106c3398f3f9dd.
Jan 23 01:10:52.967011 systemd[1]: Started cri-containerd-0781b3492aec8ca46a0b5af099ab8b9d577c3faf9a3ad165cb2338e448641e9a.scope - libcontainer container 0781b3492aec8ca46a0b5af099ab8b9d577c3faf9a3ad165cb2338e448641e9a.
Jan 23 01:10:53.059025 containerd[1589]: time="2026-01-23T01:10:53.058970318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-cftsn,Uid:ed0168c4-7839-45d2-859e-a88da085253d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0781b3492aec8ca46a0b5af099ab8b9d577c3faf9a3ad165cb2338e448641e9a\""
Jan 23 01:10:53.060568 containerd[1589]: time="2026-01-23T01:10:53.060079944Z" level=info msg="StartContainer for \"54def4db3043ec214a717bdbf475d92153a4cec49e04100ef7106c3398f3f9dd\" returns successfully"
Jan 23 01:10:53.063412 containerd[1589]: time="2026-01-23T01:10:53.063165386Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 23 01:10:53.571741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount197250633.mount: Deactivated successfully.
Jan 23 01:10:53.720745 kubelet[2899]: I0123 01:10:53.720117 2899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xwkcp" podStartSLOduration=1.720097631 podStartE2EDuration="1.720097631s" podCreationTimestamp="2026-01-23 01:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:10:53.719249557 +0000 UTC m=+6.367837857" watchObservedRunningTime="2026-01-23 01:10:53.720097631 +0000 UTC m=+6.368685897"
Jan 23 01:10:54.838536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710168442.mount: Deactivated successfully.
Jan 23 01:11:02.640893 containerd[1589]: time="2026-01-23T01:11:02.639826106Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:11:02.640893 containerd[1589]: time="2026-01-23T01:11:02.640846735Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Jan 23 01:11:02.641798 containerd[1589]: time="2026-01-23T01:11:02.641761149Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:11:02.644252 containerd[1589]: time="2026-01-23T01:11:02.644219544Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:11:02.645252 containerd[1589]: time="2026-01-23T01:11:02.645197711Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 9.581989192s"
Jan 23 01:11:02.645332 containerd[1589]: time="2026-01-23T01:11:02.645256944Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Jan 23 01:11:02.651251 containerd[1589]: time="2026-01-23T01:11:02.651173351Z" level=info msg="CreateContainer within sandbox \"0781b3492aec8ca46a0b5af099ab8b9d577c3faf9a3ad165cb2338e448641e9a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 23 01:11:02.668420 containerd[1589]: time="2026-01-23T01:11:02.667323930Z" level=info msg="Container 210fe275ae01167232c3ba9b7b940bc3b7d908ab0219bdbafdf17df84a23e60e: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:11:02.677740 containerd[1589]: time="2026-01-23T01:11:02.677641994Z" level=info msg="CreateContainer within sandbox \"0781b3492aec8ca46a0b5af099ab8b9d577c3faf9a3ad165cb2338e448641e9a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"210fe275ae01167232c3ba9b7b940bc3b7d908ab0219bdbafdf17df84a23e60e\""
Jan 23 01:11:02.681021 containerd[1589]: time="2026-01-23T01:11:02.680965141Z" level=info msg="StartContainer for \"210fe275ae01167232c3ba9b7b940bc3b7d908ab0219bdbafdf17df84a23e60e\""
Jan 23 01:11:02.683041 containerd[1589]: time="2026-01-23T01:11:02.682905649Z" level=info msg="connecting to shim 210fe275ae01167232c3ba9b7b940bc3b7d908ab0219bdbafdf17df84a23e60e" address="unix:///run/containerd/s/7cfb63555f700f3bedb9201d995404e7d776ab52fb34051628702bd818aed041" protocol=ttrpc version=3
Jan 23 01:11:02.726725 systemd[1]: Started cri-containerd-210fe275ae01167232c3ba9b7b940bc3b7d908ab0219bdbafdf17df84a23e60e.scope - libcontainer container 210fe275ae01167232c3ba9b7b940bc3b7d908ab0219bdbafdf17df84a23e60e.
Jan 23 01:11:02.786111 containerd[1589]: time="2026-01-23T01:11:02.785931564Z" level=info msg="StartContainer for \"210fe275ae01167232c3ba9b7b940bc3b7d908ab0219bdbafdf17df84a23e60e\" returns successfully"
Jan 23 01:11:03.760462 kubelet[2899]: I0123 01:11:03.759810 2899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-cftsn" podStartSLOduration=2.175424391 podStartE2EDuration="11.759769217s" podCreationTimestamp="2026-01-23 01:10:52 +0000 UTC" firstStartedPulling="2026-01-23 01:10:53.062707987 +0000 UTC m=+5.711296240" lastFinishedPulling="2026-01-23 01:11:02.647052814 +0000 UTC m=+15.295641066" observedRunningTime="2026-01-23 01:11:03.759463526 +0000 UTC m=+16.408051820" watchObservedRunningTime="2026-01-23 01:11:03.759769217 +0000 UTC m=+16.408357485"
Jan 23 01:11:06.622546 systemd[1]: cri-containerd-210fe275ae01167232c3ba9b7b940bc3b7d908ab0219bdbafdf17df84a23e60e.scope: Deactivated successfully.
Jan 23 01:11:06.670074 containerd[1589]: time="2026-01-23T01:11:06.669997472Z" level=info msg="received container exit event container_id:\"210fe275ae01167232c3ba9b7b940bc3b7d908ab0219bdbafdf17df84a23e60e\" id:\"210fe275ae01167232c3ba9b7b940bc3b7d908ab0219bdbafdf17df84a23e60e\" pid:3225 exit_status:1 exited_at:{seconds:1769130666 nanos:630376721}"
Jan 23 01:11:06.723091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-210fe275ae01167232c3ba9b7b940bc3b7d908ab0219bdbafdf17df84a23e60e-rootfs.mount: Deactivated successfully.
Jan 23 01:11:07.764254 kubelet[2899]: I0123 01:11:07.764183 2899 scope.go:117] "RemoveContainer" containerID="210fe275ae01167232c3ba9b7b940bc3b7d908ab0219bdbafdf17df84a23e60e"
Jan 23 01:11:07.770594 containerd[1589]: time="2026-01-23T01:11:07.770548296Z" level=info msg="CreateContainer within sandbox \"0781b3492aec8ca46a0b5af099ab8b9d577c3faf9a3ad165cb2338e448641e9a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 23 01:11:07.786983 containerd[1589]: time="2026-01-23T01:11:07.786474544Z" level=info msg="Container 8aca799dd0756ff1770af535d849ac0fd4c958675c71a0143cd402fc28d84a32: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:11:07.803131 containerd[1589]: time="2026-01-23T01:11:07.803032625Z" level=info msg="CreateContainer within sandbox \"0781b3492aec8ca46a0b5af099ab8b9d577c3faf9a3ad165cb2338e448641e9a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"8aca799dd0756ff1770af535d849ac0fd4c958675c71a0143cd402fc28d84a32\""
Jan 23 01:11:07.804021 containerd[1589]: time="2026-01-23T01:11:07.803982202Z" level=info msg="StartContainer for \"8aca799dd0756ff1770af535d849ac0fd4c958675c71a0143cd402fc28d84a32\""
Jan 23 01:11:07.805997 containerd[1589]: time="2026-01-23T01:11:07.805860173Z" level=info msg="connecting to shim 8aca799dd0756ff1770af535d849ac0fd4c958675c71a0143cd402fc28d84a32" address="unix:///run/containerd/s/7cfb63555f700f3bedb9201d995404e7d776ab52fb34051628702bd818aed041" protocol=ttrpc version=3
Jan 23 01:11:07.849610 systemd[1]: Started cri-containerd-8aca799dd0756ff1770af535d849ac0fd4c958675c71a0143cd402fc28d84a32.scope - libcontainer container 8aca799dd0756ff1770af535d849ac0fd4c958675c71a0143cd402fc28d84a32.
Jan 23 01:11:07.919250 containerd[1589]: time="2026-01-23T01:11:07.919201793Z" level=info msg="StartContainer for \"8aca799dd0756ff1770af535d849ac0fd4c958675c71a0143cd402fc28d84a32\" returns successfully"
Jan 23 01:11:10.171743 sudo[1891]: pam_unix(sudo:session): session closed for user root
Jan 23 01:11:10.265661 sshd[1890]: Connection closed by 20.161.92.111 port 36708
Jan 23 01:11:10.266696 sshd-session[1885]: pam_unix(sshd:session): session closed for user core
Jan 23 01:11:10.273628 systemd-logind[1569]: Session 11 logged out. Waiting for processes to exit.
Jan 23 01:11:10.274711 systemd[1]: sshd@8-10.230.15.178:22-20.161.92.111:36708.service: Deactivated successfully.
Jan 23 01:11:10.278126 systemd[1]: session-11.scope: Deactivated successfully.
Jan 23 01:11:10.278672 systemd[1]: session-11.scope: Consumed 7.766s CPU time, 160.8M memory peak.
Jan 23 01:11:10.282643 systemd-logind[1569]: Removed session 11.
Jan 23 01:11:18.776812 systemd[1]: Created slice kubepods-besteffort-pod2e200eef_18ae_4255_863d_cdc5eafe5ab5.slice - libcontainer container kubepods-besteffort-pod2e200eef_18ae_4255_863d_cdc5eafe5ab5.slice.
Jan 23 01:11:18.823727 kubelet[2899]: I0123 01:11:18.823605 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2e200eef-18ae-4255-863d-cdc5eafe5ab5-typha-certs\") pod \"calico-typha-55b9d8988c-jjscd\" (UID: \"2e200eef-18ae-4255-863d-cdc5eafe5ab5\") " pod="calico-system/calico-typha-55b9d8988c-jjscd"
Jan 23 01:11:18.824851 kubelet[2899]: I0123 01:11:18.823700 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e200eef-18ae-4255-863d-cdc5eafe5ab5-tigera-ca-bundle\") pod \"calico-typha-55b9d8988c-jjscd\" (UID: \"2e200eef-18ae-4255-863d-cdc5eafe5ab5\") " pod="calico-system/calico-typha-55b9d8988c-jjscd"
Jan 23 01:11:18.824851 kubelet[2899]: I0123 01:11:18.824741 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ldpm\" (UniqueName: \"kubernetes.io/projected/2e200eef-18ae-4255-863d-cdc5eafe5ab5-kube-api-access-7ldpm\") pod \"calico-typha-55b9d8988c-jjscd\" (UID: \"2e200eef-18ae-4255-863d-cdc5eafe5ab5\") " pod="calico-system/calico-typha-55b9d8988c-jjscd"
Jan 23 01:11:18.988355 systemd[1]: Created slice kubepods-besteffort-pod2ab3c867_17a3_4bdc_83dd_eabe7ca64f9e.slice - libcontainer container kubepods-besteffort-pod2ab3c867_17a3_4bdc_83dd_eabe7ca64f9e.slice.
Jan 23 01:11:19.026014 kubelet[2899]: I0123 01:11:19.025964 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e-node-certs\") pod \"calico-node-7j9rd\" (UID: \"2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e\") " pod="calico-system/calico-node-7j9rd"
Jan 23 01:11:19.026255 kubelet[2899]: I0123 01:11:19.026024 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e-cni-log-dir\") pod \"calico-node-7j9rd\" (UID: \"2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e\") " pod="calico-system/calico-node-7j9rd"
Jan 23 01:11:19.026255 kubelet[2899]: I0123 01:11:19.026057 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e-cni-net-dir\") pod \"calico-node-7j9rd\" (UID: \"2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e\") " pod="calico-system/calico-node-7j9rd"
Jan 23 01:11:19.026361 kubelet[2899]: I0123 01:11:19.026086 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e-tigera-ca-bundle\") pod \"calico-node-7j9rd\" (UID: \"2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e\") " pod="calico-system/calico-node-7j9rd"
Jan 23 01:11:19.026361 kubelet[2899]: I0123 01:11:19.026331 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e-flexvol-driver-host\") pod \"calico-node-7j9rd\" (UID: \"2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e\") " pod="calico-system/calico-node-7j9rd"
Jan 23 01:11:19.026973 kubelet[2899]: I0123 01:11:19.026362 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e-lib-modules\") pod \"calico-node-7j9rd\" (UID: \"2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e\") " pod="calico-system/calico-node-7j9rd"
Jan 23 01:11:19.026973 kubelet[2899]: I0123 01:11:19.026437 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e-policysync\") pod \"calico-node-7j9rd\" (UID: \"2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e\") " pod="calico-system/calico-node-7j9rd"
Jan 23 01:11:19.026973 kubelet[2899]: I0123 01:11:19.026468 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e-var-lib-calico\") pod \"calico-node-7j9rd\" (UID: \"2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e\") " pod="calico-system/calico-node-7j9rd"
Jan 23 01:11:19.026973 kubelet[2899]: I0123 01:11:19.026506 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e-var-run-calico\") pod \"calico-node-7j9rd\" (UID: \"2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e\") " pod="calico-system/calico-node-7j9rd"
Jan 23 01:11:19.026973 kubelet[2899]: I0123 01:11:19.026563 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e-cni-bin-dir\") pod \"calico-node-7j9rd\" (UID: \"2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e\") " pod="calico-system/calico-node-7j9rd"
Jan 23 01:11:19.027430 kubelet[2899]: I0123 01:11:19.026623 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e-xtables-lock\") pod \"calico-node-7j9rd\" (UID: \"2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e\") " pod="calico-system/calico-node-7j9rd"
Jan 23 01:11:19.027430 kubelet[2899]: I0123 01:11:19.026797 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hqv7\" (UniqueName: \"kubernetes.io/projected/2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e-kube-api-access-8hqv7\") pod \"calico-node-7j9rd\" (UID: \"2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e\") " pod="calico-system/calico-node-7j9rd"
Jan 23 01:11:19.094302 containerd[1589]: time="2026-01-23T01:11:19.094149946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55b9d8988c-jjscd,Uid:2e200eef-18ae-4255-863d-cdc5eafe5ab5,Namespace:calico-system,Attempt:0,}"
Jan 23 01:11:19.137736 kubelet[2899]: E0123 01:11:19.137122 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.138355 kubelet[2899]: W0123 01:11:19.138116 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.138355 kubelet[2899]: E0123 01:11:19.138288 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.145599 kubelet[2899]: E0123 01:11:19.141232 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.145599 kubelet[2899]: W0123 01:11:19.141256 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.145599 kubelet[2899]: E0123 01:11:19.141276 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.180968 kubelet[2899]: E0123 01:11:19.180910 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.181327 kubelet[2899]: W0123 01:11:19.181301 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.181672 kubelet[2899]: E0123 01:11:19.181434 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.182283 kubelet[2899]: E0123 01:11:19.182226 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2q95q" podUID="ac789593-88de-4afb-9cdb-f9323fe8cb8a"
Jan 23 01:11:19.190542 kubelet[2899]: E0123 01:11:19.190378 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.190542 kubelet[2899]: W0123 01:11:19.190493 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.190689 kubelet[2899]: E0123 01:11:19.190514 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.191413 kubelet[2899]: E0123 01:11:19.191098 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.191413 kubelet[2899]: W0123 01:11:19.191119 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.191413 kubelet[2899]: E0123 01:11:19.191170 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.192370 kubelet[2899]: E0123 01:11:19.191837 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.192370 kubelet[2899]: W0123 01:11:19.192225 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.192370 kubelet[2899]: E0123 01:11:19.192245 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.194583 kubelet[2899]: E0123 01:11:19.194466 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.195924 kubelet[2899]: W0123 01:11:19.194809 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.195924 kubelet[2899]: E0123 01:11:19.194845 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.197043 kubelet[2899]: E0123 01:11:19.197013 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.197749 kubelet[2899]: W0123 01:11:19.197706 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.197749 kubelet[2899]: E0123 01:11:19.197741 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.198105 kubelet[2899]: E0123 01:11:19.198048 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.198105 kubelet[2899]: W0123 01:11:19.198090 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.198105 kubelet[2899]: E0123 01:11:19.198107 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.199096 kubelet[2899]: E0123 01:11:19.199073 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.199178 kubelet[2899]: W0123 01:11:19.199099 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.199178 kubelet[2899]: E0123 01:11:19.199139 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.199535 kubelet[2899]: E0123 01:11:19.199512 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.199535 kubelet[2899]: W0123 01:11:19.199533 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.199654 kubelet[2899]: E0123 01:11:19.199549 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.200632 kubelet[2899]: E0123 01:11:19.200585 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.200632 kubelet[2899]: W0123 01:11:19.200608 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.200632 kubelet[2899]: E0123 01:11:19.200625 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.201078 kubelet[2899]: E0123 01:11:19.201046 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.201078 kubelet[2899]: W0123 01:11:19.201068 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.201190 kubelet[2899]: E0123 01:11:19.201086 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.201907 kubelet[2899]: E0123 01:11:19.201872 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.201907 kubelet[2899]: W0123 01:11:19.201894 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.201907 kubelet[2899]: E0123 01:11:19.201910 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.202880 kubelet[2899]: E0123 01:11:19.202803 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.202880 kubelet[2899]: W0123 01:11:19.202824 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.202880 kubelet[2899]: E0123 01:11:19.202840 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.203222 kubelet[2899]: E0123 01:11:19.203086 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.203222 kubelet[2899]: W0123 01:11:19.203100 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.203222 kubelet[2899]: E0123 01:11:19.203117 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.204530 kubelet[2899]: E0123 01:11:19.204480 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.204530 kubelet[2899]: W0123 01:11:19.204502 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.204530 kubelet[2899]: E0123 01:11:19.204519 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.205116 kubelet[2899]: E0123 01:11:19.204756 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.205116 kubelet[2899]: W0123 01:11:19.204769 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.205116 kubelet[2899]: E0123 01:11:19.204796 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.211438 kubelet[2899]: E0123 01:11:19.211409 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.211438 kubelet[2899]: W0123 01:11:19.211435 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.211790 kubelet[2899]: E0123 01:11:19.211459 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.211790 kubelet[2899]: E0123 01:11:19.211770 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.211790 kubelet[2899]: W0123 01:11:19.211797 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.212059 kubelet[2899]: E0123 01:11:19.211812 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.212059 kubelet[2899]: E0123 01:11:19.212049 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.212141 kubelet[2899]: W0123 01:11:19.212062 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.212141 kubelet[2899]: E0123 01:11:19.212077 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.216135 kubelet[2899]: E0123 01:11:19.216094 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.216135 kubelet[2899]: W0123 01:11:19.216116 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.216135 kubelet[2899]: E0123 01:11:19.216135 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.217044 kubelet[2899]: E0123 01:11:19.216412 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.217044 kubelet[2899]: W0123 01:11:19.216427 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.217044 kubelet[2899]: E0123 01:11:19.216442 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.233965 kubelet[2899]: E0123 01:11:19.233742 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.233965 kubelet[2899]: W0123 01:11:19.233909 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.235376 kubelet[2899]: E0123 01:11:19.233936 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.235376 kubelet[2899]: I0123 01:11:19.234710 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac789593-88de-4afb-9cdb-f9323fe8cb8a-kubelet-dir\") pod \"csi-node-driver-2q95q\" (UID: \"ac789593-88de-4afb-9cdb-f9323fe8cb8a\") " pod="calico-system/csi-node-driver-2q95q"
Jan 23 01:11:19.235914 kubelet[2899]: E0123 01:11:19.235864 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.235914 kubelet[2899]: W0123 01:11:19.235891 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.235914 kubelet[2899]: E0123 01:11:19.235910 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.237274 kubelet[2899]: E0123 01:11:19.237242 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.237274 kubelet[2899]: W0123 01:11:19.237265 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.237468 kubelet[2899]: E0123 01:11:19.237402 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.238832 kubelet[2899]: E0123 01:11:19.238808 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.238832 kubelet[2899]: W0123 01:11:19.238830 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.239203 kubelet[2899]: E0123 01:11:19.238847 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.239203 kubelet[2899]: I0123 01:11:19.238880 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac789593-88de-4afb-9cdb-f9323fe8cb8a-registration-dir\") pod \"csi-node-driver-2q95q\" (UID: \"ac789593-88de-4afb-9cdb-f9323fe8cb8a\") " pod="calico-system/csi-node-driver-2q95q"
Jan 23 01:11:19.239893 kubelet[2899]: E0123 01:11:19.239735 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.239893 kubelet[2899]: W0123 01:11:19.239770 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.239893 kubelet[2899]: E0123 01:11:19.239840 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:11:19.240666 kubelet[2899]: E0123 01:11:19.240646 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:11:19.240936 kubelet[2899]: W0123 01:11:19.240867 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:11:19.240936 kubelet[2899]: E0123 01:11:19.240893 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 23 01:11:19.241770 containerd[1589]: time="2026-01-23T01:11:19.241361989Z" level=info msg="connecting to shim f8d8f6b5c15e15563a4aa0c4fa67fafba10977d3b4b8caea4c6850b66b074922" address="unix:///run/containerd/s/513791f167f7f756f11b056807e1029b08010bacda2e14475b228263da023027" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:19.241852 kubelet[2899]: E0123 01:11:19.241582 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.241852 kubelet[2899]: W0123 01:11:19.241595 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.241852 kubelet[2899]: E0123 01:11:19.241610 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.241852 kubelet[2899]: I0123 01:11:19.241641 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac789593-88de-4afb-9cdb-f9323fe8cb8a-socket-dir\") pod \"csi-node-driver-2q95q\" (UID: \"ac789593-88de-4afb-9cdb-f9323fe8cb8a\") " pod="calico-system/csi-node-driver-2q95q" Jan 23 01:11:19.242652 kubelet[2899]: E0123 01:11:19.242510 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.242652 kubelet[2899]: W0123 01:11:19.242532 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.242652 kubelet[2899]: E0123 01:11:19.242549 2899 plugins.go:697] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.242652 kubelet[2899]: I0123 01:11:19.242571 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ac789593-88de-4afb-9cdb-f9323fe8cb8a-varrun\") pod \"csi-node-driver-2q95q\" (UID: \"ac789593-88de-4afb-9cdb-f9323fe8cb8a\") " pod="calico-system/csi-node-driver-2q95q" Jan 23 01:11:19.243551 kubelet[2899]: E0123 01:11:19.243347 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.243551 kubelet[2899]: W0123 01:11:19.243371 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.243551 kubelet[2899]: E0123 01:11:19.243387 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:11:19.243551 kubelet[2899]: I0123 01:11:19.243427 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcwsw\" (UniqueName: \"kubernetes.io/projected/ac789593-88de-4afb-9cdb-f9323fe8cb8a-kube-api-access-mcwsw\") pod \"csi-node-driver-2q95q\" (UID: \"ac789593-88de-4afb-9cdb-f9323fe8cb8a\") " pod="calico-system/csi-node-driver-2q95q" Jan 23 01:11:19.244103 kubelet[2899]: E0123 01:11:19.244059 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.244413 kubelet[2899]: W0123 01:11:19.244280 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.244413 kubelet[2899]: E0123 01:11:19.244309 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.244821 kubelet[2899]: E0123 01:11:19.244802 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.245089 kubelet[2899]: W0123 01:11:19.244915 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.245089 kubelet[2899]: E0123 01:11:19.244939 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:11:19.245347 kubelet[2899]: E0123 01:11:19.245329 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.245587 kubelet[2899]: W0123 01:11:19.245557 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.245763 kubelet[2899]: E0123 01:11:19.245741 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.246509 kubelet[2899]: E0123 01:11:19.246305 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.246509 kubelet[2899]: W0123 01:11:19.246324 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.246509 kubelet[2899]: E0123 01:11:19.246340 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:11:19.247129 kubelet[2899]: E0123 01:11:19.246983 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.247129 kubelet[2899]: W0123 01:11:19.247050 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.247319 kubelet[2899]: E0123 01:11:19.247074 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.248614 kubelet[2899]: E0123 01:11:19.248594 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.248812 kubelet[2899]: W0123 01:11:19.248686 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.248812 kubelet[2899]: E0123 01:11:19.248707 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.299686 containerd[1589]: time="2026-01-23T01:11:19.299161496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7j9rd,Uid:2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:19.303605 systemd[1]: Started cri-containerd-f8d8f6b5c15e15563a4aa0c4fa67fafba10977d3b4b8caea4c6850b66b074922.scope - libcontainer container f8d8f6b5c15e15563a4aa0c4fa67fafba10977d3b4b8caea4c6850b66b074922. 
Jan 23 01:11:19.345596 kubelet[2899]: E0123 01:11:19.345427 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.346264 kubelet[2899]: W0123 01:11:19.345960 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.346264 kubelet[2899]: E0123 01:11:19.346004 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.347568 kubelet[2899]: E0123 01:11:19.347467 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.347568 kubelet[2899]: W0123 01:11:19.347487 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.348222 kubelet[2899]: E0123 01:11:19.347800 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.348679 kubelet[2899]: E0123 01:11:19.348657 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.348943 kubelet[2899]: W0123 01:11:19.348858 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.349949 kubelet[2899]: E0123 01:11:19.348885 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:11:19.350566 kubelet[2899]: E0123 01:11:19.350455 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.351009 kubelet[2899]: W0123 01:11:19.350970 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.351306 kubelet[2899]: E0123 01:11:19.351260 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.355595 kubelet[2899]: E0123 01:11:19.355373 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.355595 kubelet[2899]: W0123 01:11:19.355440 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.355595 kubelet[2899]: E0123 01:11:19.355460 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:11:19.356865 kubelet[2899]: E0123 01:11:19.356843 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.357147 kubelet[2899]: W0123 01:11:19.356975 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.357147 kubelet[2899]: E0123 01:11:19.357001 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.357670 kubelet[2899]: E0123 01:11:19.357431 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.357670 kubelet[2899]: W0123 01:11:19.357449 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.357670 kubelet[2899]: E0123 01:11:19.357466 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:11:19.358223 kubelet[2899]: E0123 01:11:19.358202 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.358656 kubelet[2899]: W0123 01:11:19.358410 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.358656 kubelet[2899]: E0123 01:11:19.358436 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.359703 kubelet[2899]: E0123 01:11:19.359581 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.360239 kubelet[2899]: W0123 01:11:19.359907 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.360239 kubelet[2899]: E0123 01:11:19.359935 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:11:19.360936 kubelet[2899]: E0123 01:11:19.360821 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.361489 kubelet[2899]: W0123 01:11:19.361204 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.361489 kubelet[2899]: E0123 01:11:19.361233 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.362367 kubelet[2899]: E0123 01:11:19.362336 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.363055 kubelet[2899]: W0123 01:11:19.362505 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.363055 kubelet[2899]: E0123 01:11:19.362528 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:11:19.363170 containerd[1589]: time="2026-01-23T01:11:19.362104788Z" level=info msg="connecting to shim 629454eea8ce3569a92b4c5090ff5299aea6f6c4e946d1515620000f853aa4e4" address="unix:///run/containerd/s/34ea0b45911532ff02e6adb01dbd32b73d10585da550edcdc218f59613f1a75f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:19.364058 kubelet[2899]: E0123 01:11:19.363802 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.364058 kubelet[2899]: W0123 01:11:19.363827 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.364058 kubelet[2899]: E0123 01:11:19.363844 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.364818 kubelet[2899]: E0123 01:11:19.364692 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.365341 kubelet[2899]: W0123 01:11:19.365070 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.365341 kubelet[2899]: E0123 01:11:19.365112 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:11:19.366139 kubelet[2899]: E0123 01:11:19.366117 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.366413 kubelet[2899]: W0123 01:11:19.366336 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.366413 kubelet[2899]: E0123 01:11:19.366364 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.367727 kubelet[2899]: E0123 01:11:19.367464 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.367727 kubelet[2899]: W0123 01:11:19.367485 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.367727 kubelet[2899]: E0123 01:11:19.367501 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:11:19.369426 kubelet[2899]: E0123 01:11:19.368468 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.369667 kubelet[2899]: W0123 01:11:19.368488 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.369667 kubelet[2899]: E0123 01:11:19.369540 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.369934 kubelet[2899]: E0123 01:11:19.369914 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.370181 kubelet[2899]: W0123 01:11:19.370030 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.370181 kubelet[2899]: E0123 01:11:19.370057 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:11:19.370513 kubelet[2899]: E0123 01:11:19.370492 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.370635 kubelet[2899]: W0123 01:11:19.370614 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.370844 kubelet[2899]: E0123 01:11:19.370820 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.371532 kubelet[2899]: E0123 01:11:19.371511 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.371947 kubelet[2899]: W0123 01:11:19.371705 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.371947 kubelet[2899]: E0123 01:11:19.371732 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:11:19.372698 kubelet[2899]: E0123 01:11:19.372663 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.373155 kubelet[2899]: W0123 01:11:19.372880 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.373155 kubelet[2899]: E0123 01:11:19.372904 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.373756 kubelet[2899]: E0123 01:11:19.373710 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.374087 kubelet[2899]: W0123 01:11:19.373973 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.374302 kubelet[2899]: E0123 01:11:19.374186 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:11:19.375639 kubelet[2899]: E0123 01:11:19.375482 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.375639 kubelet[2899]: W0123 01:11:19.375501 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.375639 kubelet[2899]: E0123 01:11:19.375518 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.376152 kubelet[2899]: E0123 01:11:19.375954 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.376152 kubelet[2899]: W0123 01:11:19.375995 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.376152 kubelet[2899]: E0123 01:11:19.376014 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:11:19.376907 kubelet[2899]: E0123 01:11:19.376771 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.376907 kubelet[2899]: W0123 01:11:19.376823 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.376907 kubelet[2899]: E0123 01:11:19.376842 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.377858 kubelet[2899]: E0123 01:11:19.377786 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.378121 kubelet[2899]: W0123 01:11:19.378033 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.378121 kubelet[2899]: E0123 01:11:19.378056 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:11:19.404036 kubelet[2899]: E0123 01:11:19.403891 2899 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:11:19.404036 kubelet[2899]: W0123 01:11:19.403950 2899 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:11:19.404036 kubelet[2899]: E0123 01:11:19.403978 2899 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:11:19.418752 systemd[1]: Started cri-containerd-629454eea8ce3569a92b4c5090ff5299aea6f6c4e946d1515620000f853aa4e4.scope - libcontainer container 629454eea8ce3569a92b4c5090ff5299aea6f6c4e946d1515620000f853aa4e4. Jan 23 01:11:19.478697 containerd[1589]: time="2026-01-23T01:11:19.478645629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55b9d8988c-jjscd,Uid:2e200eef-18ae-4255-863d-cdc5eafe5ab5,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8d8f6b5c15e15563a4aa0c4fa67fafba10977d3b4b8caea4c6850b66b074922\"" Jan 23 01:11:19.481013 containerd[1589]: time="2026-01-23T01:11:19.480959681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7j9rd,Uid:2ab3c867-17a3-4bdc-83dd-eabe7ca64f9e,Namespace:calico-system,Attempt:0,} returns sandbox id \"629454eea8ce3569a92b4c5090ff5299aea6f6c4e946d1515620000f853aa4e4\"" Jan 23 01:11:19.491628 containerd[1589]: time="2026-01-23T01:11:19.491595825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 01:11:20.619901 kubelet[2899]: E0123 01:11:20.619773 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2q95q" podUID="ac789593-88de-4afb-9cdb-f9323fe8cb8a"
Jan 23 01:11:21.019911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3748225694.mount: Deactivated successfully.
Jan 23 01:11:21.310681 containerd[1589]: time="2026-01-23T01:11:21.307357328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:11:21.310681 containerd[1589]: time="2026-01-23T01:11:21.309569028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492"
Jan 23 01:11:21.316030 containerd[1589]: time="2026-01-23T01:11:21.315967002Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:11:21.318070 containerd[1589]: time="2026-01-23T01:11:21.318025448Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.825098088s"
Jan 23 01:11:21.318172 containerd[1589]: time="2026-01-23T01:11:21.318073217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 23 01:11:21.321233 containerd[1589]: time="2026-01-23T01:11:21.320854632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:11:21.327418 containerd[1589]: time="2026-01-23T01:11:21.326561369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 23 01:11:21.343410 containerd[1589]: time="2026-01-23T01:11:21.342984351Z" level=info msg="CreateContainer within sandbox \"629454eea8ce3569a92b4c5090ff5299aea6f6c4e946d1515620000f853aa4e4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 23 01:11:21.366417 containerd[1589]: time="2026-01-23T01:11:21.365022554Z" level=info msg="Container ef9a969b584524133c3a0fb0b02a7389f88c42ca51c4984d5327a6cbe8d7058d: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:11:21.410887 containerd[1589]: time="2026-01-23T01:11:21.410800040Z" level=info msg="CreateContainer within sandbox \"629454eea8ce3569a92b4c5090ff5299aea6f6c4e946d1515620000f853aa4e4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ef9a969b584524133c3a0fb0b02a7389f88c42ca51c4984d5327a6cbe8d7058d\""
Jan 23 01:11:21.412173 containerd[1589]: time="2026-01-23T01:11:21.412074196Z" level=info msg="StartContainer for \"ef9a969b584524133c3a0fb0b02a7389f88c42ca51c4984d5327a6cbe8d7058d\""
Jan 23 01:11:21.416408 containerd[1589]: time="2026-01-23T01:11:21.415364469Z" level=info msg="connecting to shim ef9a969b584524133c3a0fb0b02a7389f88c42ca51c4984d5327a6cbe8d7058d" address="unix:///run/containerd/s/34ea0b45911532ff02e6adb01dbd32b73d10585da550edcdc218f59613f1a75f" protocol=ttrpc version=3
Jan 23 01:11:21.478642 systemd[1]: Started cri-containerd-ef9a969b584524133c3a0fb0b02a7389f88c42ca51c4984d5327a6cbe8d7058d.scope - libcontainer container ef9a969b584524133c3a0fb0b02a7389f88c42ca51c4984d5327a6cbe8d7058d.
Jan 23 01:11:21.785207 containerd[1589]: time="2026-01-23T01:11:21.785158983Z" level=info msg="StartContainer for \"ef9a969b584524133c3a0fb0b02a7389f88c42ca51c4984d5327a6cbe8d7058d\" returns successfully"
Jan 23 01:11:21.805620 systemd[1]: cri-containerd-ef9a969b584524133c3a0fb0b02a7389f88c42ca51c4984d5327a6cbe8d7058d.scope: Deactivated successfully.
Jan 23 01:11:21.828438 containerd[1589]: time="2026-01-23T01:11:21.828269420Z" level=info msg="received container exit event container_id:\"ef9a969b584524133c3a0fb0b02a7389f88c42ca51c4984d5327a6cbe8d7058d\" id:\"ef9a969b584524133c3a0fb0b02a7389f88c42ca51c4984d5327a6cbe8d7058d\" pid:3543 exited_at:{seconds:1769130681 nanos:811117533}"
Jan 23 01:11:21.903130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef9a969b584524133c3a0fb0b02a7389f88c42ca51c4984d5327a6cbe8d7058d-rootfs.mount: Deactivated successfully.
Jan 23 01:11:22.621255 kubelet[2899]: E0123 01:11:22.621157 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2q95q" podUID="ac789593-88de-4afb-9cdb-f9323fe8cb8a"
Jan 23 01:11:24.386653 containerd[1589]: time="2026-01-23T01:11:24.386550082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:11:24.388158 containerd[1589]: time="2026-01-23T01:11:24.387888474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890"
Jan 23 01:11:24.389345 containerd[1589]: time="2026-01-23T01:11:24.389255475Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:11:24.392280 containerd[1589]: time="2026-01-23T01:11:24.392247206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:11:24.393168 containerd[1589]: time="2026-01-23T01:11:24.393121031Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.066506492s"
Jan 23 01:11:24.393254 containerd[1589]: time="2026-01-23T01:11:24.393173450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 23 01:11:24.396076 containerd[1589]: time="2026-01-23T01:11:24.395599541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 23 01:11:24.417114 containerd[1589]: time="2026-01-23T01:11:24.417062604Z" level=info msg="CreateContainer within sandbox \"f8d8f6b5c15e15563a4aa0c4fa67fafba10977d3b4b8caea4c6850b66b074922\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 23 01:11:24.432558 containerd[1589]: time="2026-01-23T01:11:24.431581599Z" level=info msg="Container cd59b3d8162e307f8ef717222a174ddcdbc64830e73faaa6f054572a0322991c: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:11:24.457276 containerd[1589]: time="2026-01-23T01:11:24.457225323Z" level=info msg="CreateContainer within sandbox \"f8d8f6b5c15e15563a4aa0c4fa67fafba10977d3b4b8caea4c6850b66b074922\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cd59b3d8162e307f8ef717222a174ddcdbc64830e73faaa6f054572a0322991c\""
Jan 23 01:11:24.459705 containerd[1589]: time="2026-01-23T01:11:24.459673360Z" level=info msg="StartContainer for \"cd59b3d8162e307f8ef717222a174ddcdbc64830e73faaa6f054572a0322991c\""
Jan 23 01:11:24.461282 containerd[1589]: time="2026-01-23T01:11:24.461249122Z" level=info msg="connecting to shim cd59b3d8162e307f8ef717222a174ddcdbc64830e73faaa6f054572a0322991c" address="unix:///run/containerd/s/513791f167f7f756f11b056807e1029b08010bacda2e14475b228263da023027" protocol=ttrpc version=3
Jan 23 01:11:24.502808 systemd[1]: Started cri-containerd-cd59b3d8162e307f8ef717222a174ddcdbc64830e73faaa6f054572a0322991c.scope - libcontainer container cd59b3d8162e307f8ef717222a174ddcdbc64830e73faaa6f054572a0322991c.
Jan 23 01:11:24.589275 containerd[1589]: time="2026-01-23T01:11:24.589227838Z" level=info msg="StartContainer for \"cd59b3d8162e307f8ef717222a174ddcdbc64830e73faaa6f054572a0322991c\" returns successfully"
Jan 23 01:11:24.624448 kubelet[2899]: E0123 01:11:24.623085 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2q95q" podUID="ac789593-88de-4afb-9cdb-f9323fe8cb8a"
Jan 23 01:11:25.870912 kubelet[2899]: I0123 01:11:25.867430 2899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55b9d8988c-jjscd" podStartSLOduration=2.964977411 podStartE2EDuration="7.866807449s" podCreationTimestamp="2026-01-23 01:11:18 +0000 UTC" firstStartedPulling="2026-01-23 01:11:19.493295644 +0000 UTC m=+32.141883912" lastFinishedPulling="2026-01-23 01:11:24.395125691 +0000 UTC m=+37.043713950" observedRunningTime="2026-01-23 01:11:24.879605183 +0000 UTC m=+37.528193468" watchObservedRunningTime="2026-01-23 01:11:25.866807449 +0000 UTC m=+38.515395709"
Jan 23 01:11:26.618991 kubelet[2899]: E0123 01:11:26.618909 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2q95q" podUID="ac789593-88de-4afb-9cdb-f9323fe8cb8a"
Jan 23 01:11:28.619412 kubelet[2899]: E0123 01:11:28.618895 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2q95q" podUID="ac789593-88de-4afb-9cdb-f9323fe8cb8a"
Jan 23 01:11:30.619319 kubelet[2899]: E0123 01:11:30.619219 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2q95q" podUID="ac789593-88de-4afb-9cdb-f9323fe8cb8a"
Jan 23 01:11:30.962579 containerd[1589]: time="2026-01-23T01:11:30.962374242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:11:30.964898 containerd[1589]: time="2026-01-23T01:11:30.964852654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Jan 23 01:11:30.969349 containerd[1589]: time="2026-01-23T01:11:30.969057680Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:11:30.973012 containerd[1589]: time="2026-01-23T01:11:30.972967454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:11:30.974286 containerd[1589]: time="2026-01-23T01:11:30.973956743Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.575975633s"
Jan 23 01:11:30.974286 containerd[1589]: time="2026-01-23T01:11:30.973994666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Jan 23 01:11:30.978605 containerd[1589]: time="2026-01-23T01:11:30.978538718Z" level=info msg="CreateContainer within sandbox \"629454eea8ce3569a92b4c5090ff5299aea6f6c4e946d1515620000f853aa4e4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 23 01:11:31.008425 containerd[1589]: time="2026-01-23T01:11:31.006665648Z" level=info msg="Container 3f83c7f663a3fe2f28cd7cda8080f72be42fb90173344afa99c2d4bf6afbea46: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:11:31.012836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241212664.mount: Deactivated successfully.
Jan 23 01:11:31.045132 containerd[1589]: time="2026-01-23T01:11:31.045058449Z" level=info msg="CreateContainer within sandbox \"629454eea8ce3569a92b4c5090ff5299aea6f6c4e946d1515620000f853aa4e4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3f83c7f663a3fe2f28cd7cda8080f72be42fb90173344afa99c2d4bf6afbea46\""
Jan 23 01:11:31.046317 containerd[1589]: time="2026-01-23T01:11:31.046287349Z" level=info msg="StartContainer for \"3f83c7f663a3fe2f28cd7cda8080f72be42fb90173344afa99c2d4bf6afbea46\""
Jan 23 01:11:31.048525 containerd[1589]: time="2026-01-23T01:11:31.048492462Z" level=info msg="connecting to shim 3f83c7f663a3fe2f28cd7cda8080f72be42fb90173344afa99c2d4bf6afbea46" address="unix:///run/containerd/s/34ea0b45911532ff02e6adb01dbd32b73d10585da550edcdc218f59613f1a75f" protocol=ttrpc version=3
Jan 23 01:11:31.088663 systemd[1]: Started cri-containerd-3f83c7f663a3fe2f28cd7cda8080f72be42fb90173344afa99c2d4bf6afbea46.scope - libcontainer container 3f83c7f663a3fe2f28cd7cda8080f72be42fb90173344afa99c2d4bf6afbea46.
Jan 23 01:11:31.235942 containerd[1589]: time="2026-01-23T01:11:31.235774429Z" level=info msg="StartContainer for \"3f83c7f663a3fe2f28cd7cda8080f72be42fb90173344afa99c2d4bf6afbea46\" returns successfully"
Jan 23 01:11:32.228868 systemd[1]: cri-containerd-3f83c7f663a3fe2f28cd7cda8080f72be42fb90173344afa99c2d4bf6afbea46.scope: Deactivated successfully.
Jan 23 01:11:32.229343 systemd[1]: cri-containerd-3f83c7f663a3fe2f28cd7cda8080f72be42fb90173344afa99c2d4bf6afbea46.scope: Consumed 780ms CPU time, 162.5M memory peak, 6.8M read from disk, 171.3M written to disk.
Jan 23 01:11:32.320238 containerd[1589]: time="2026-01-23T01:11:32.319973702Z" level=info msg="received container exit event container_id:\"3f83c7f663a3fe2f28cd7cda8080f72be42fb90173344afa99c2d4bf6afbea46\" id:\"3f83c7f663a3fe2f28cd7cda8080f72be42fb90173344afa99c2d4bf6afbea46\" pid:3649 exited_at:{seconds:1769130692 nanos:319614807}"
Jan 23 01:11:32.335952 kubelet[2899]: I0123 01:11:32.333955 2899 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Jan 23 01:11:32.416596 systemd[1]: Created slice kubepods-burstable-podf4603d7b_3b99_4c95_a909_967677b55cd1.slice - libcontainer container kubepods-burstable-podf4603d7b_3b99_4c95_a909_967677b55cd1.slice.
Jan 23 01:11:32.447261 systemd[1]: Created slice kubepods-burstable-pod36b120e1_773f_44d0_abdd_d8ef5044f795.slice - libcontainer container kubepods-burstable-pod36b120e1_773f_44d0_abdd_d8ef5044f795.slice.
Jan 23 01:11:32.461413 kubelet[2899]: I0123 01:11:32.460717 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wxcp\" (UniqueName: \"kubernetes.io/projected/bc145d36-eea8-4680-ac11-0b79793cc035-kube-api-access-4wxcp\") pod \"calico-kube-controllers-55994859c6-2x5qp\" (UID: \"bc145d36-eea8-4680-ac11-0b79793cc035\") " pod="calico-system/calico-kube-controllers-55994859c6-2x5qp"
Jan 23 01:11:32.461413 kubelet[2899]: I0123 01:11:32.460795 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgg6m\" (UniqueName: \"kubernetes.io/projected/36b120e1-773f-44d0-abdd-d8ef5044f795-kube-api-access-lgg6m\") pod \"coredns-66bc5c9577-rpxb6\" (UID: \"36b120e1-773f-44d0-abdd-d8ef5044f795\") " pod="kube-system/coredns-66bc5c9577-rpxb6"
Jan 23 01:11:32.461413 kubelet[2899]: I0123 01:11:32.460839 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ddk4\" (UniqueName: \"kubernetes.io/projected/f4603d7b-3b99-4c95-a909-967677b55cd1-kube-api-access-9ddk4\") pod \"coredns-66bc5c9577-tz2wv\" (UID: \"f4603d7b-3b99-4c95-a909-967677b55cd1\") " pod="kube-system/coredns-66bc5c9577-tz2wv"
Jan 23 01:11:32.461413 kubelet[2899]: I0123 01:11:32.460882 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4603d7b-3b99-4c95-a909-967677b55cd1-config-volume\") pod \"coredns-66bc5c9577-tz2wv\" (UID: \"f4603d7b-3b99-4c95-a909-967677b55cd1\") " pod="kube-system/coredns-66bc5c9577-tz2wv"
Jan 23 01:11:32.461413 kubelet[2899]: I0123 01:11:32.460920 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0436e42-d053-4203-971f-0d3de78e1ec3-whisker-backend-key-pair\") pod \"whisker-6bccfcb4d5-9cdwm\" (UID: \"e0436e42-d053-4203-971f-0d3de78e1ec3\") " pod="calico-system/whisker-6bccfcb4d5-9cdwm"
Jan 23 01:11:32.461801 kubelet[2899]: I0123 01:11:32.460966 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl2td\" (UniqueName: \"kubernetes.io/projected/e0436e42-d053-4203-971f-0d3de78e1ec3-kube-api-access-tl2td\") pod \"whisker-6bccfcb4d5-9cdwm\" (UID: \"e0436e42-d053-4203-971f-0d3de78e1ec3\") " pod="calico-system/whisker-6bccfcb4d5-9cdwm"
Jan 23 01:11:32.461801 kubelet[2899]: I0123 01:11:32.461088 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0436e42-d053-4203-971f-0d3de78e1ec3-whisker-ca-bundle\") pod \"whisker-6bccfcb4d5-9cdwm\" (UID: \"e0436e42-d053-4203-971f-0d3de78e1ec3\") " pod="calico-system/whisker-6bccfcb4d5-9cdwm"
Jan 23 01:11:32.461801 kubelet[2899]: I0123 01:11:32.461124 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36b120e1-773f-44d0-abdd-d8ef5044f795-config-volume\") pod \"coredns-66bc5c9577-rpxb6\" (UID: \"36b120e1-773f-44d0-abdd-d8ef5044f795\") " pod="kube-system/coredns-66bc5c9577-rpxb6"
Jan 23 01:11:32.461801 kubelet[2899]: I0123 01:11:32.461167 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc145d36-eea8-4680-ac11-0b79793cc035-tigera-ca-bundle\") pod \"calico-kube-controllers-55994859c6-2x5qp\" (UID: \"bc145d36-eea8-4680-ac11-0b79793cc035\") " pod="calico-system/calico-kube-controllers-55994859c6-2x5qp"
Jan 23 01:11:32.483665 systemd[1]: Created slice kubepods-besteffort-pode0436e42_d053_4203_971f_0d3de78e1ec3.slice - libcontainer container kubepods-besteffort-pode0436e42_d053_4203_971f_0d3de78e1ec3.slice.
Jan 23 01:11:32.506632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f83c7f663a3fe2f28cd7cda8080f72be42fb90173344afa99c2d4bf6afbea46-rootfs.mount: Deactivated successfully.
Jan 23 01:11:32.517323 systemd[1]: Created slice kubepods-besteffort-podbc145d36_eea8_4680_ac11_0b79793cc035.slice - libcontainer container kubepods-besteffort-podbc145d36_eea8_4680_ac11_0b79793cc035.slice.
Jan 23 01:11:32.533340 systemd[1]: Created slice kubepods-besteffort-podda5b2d2c_13cd_4988_8a1e_436e3c779260.slice - libcontainer container kubepods-besteffort-podda5b2d2c_13cd_4988_8a1e_436e3c779260.slice.
Jan 23 01:11:32.548341 systemd[1]: Created slice kubepods-besteffort-pod236b218f_d8af_4e9e_b6b6_8f9ea312a2ce.slice - libcontainer container kubepods-besteffort-pod236b218f_d8af_4e9e_b6b6_8f9ea312a2ce.slice.
Jan 23 01:11:32.607700 kubelet[2899]: I0123 01:11:32.563377 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6ae36994-0284-456d-8619-5a1f2ff25c95-goldmane-key-pair\") pod \"goldmane-7c778bb748-f4t5v\" (UID: \"6ae36994-0284-456d-8619-5a1f2ff25c95\") " pod="calico-system/goldmane-7c778bb748-f4t5v"
Jan 23 01:11:32.607700 kubelet[2899]: I0123 01:11:32.563507 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqlwf\" (UniqueName: \"kubernetes.io/projected/6ae36994-0284-456d-8619-5a1f2ff25c95-kube-api-access-sqlwf\") pod \"goldmane-7c778bb748-f4t5v\" (UID: \"6ae36994-0284-456d-8619-5a1f2ff25c95\") " pod="calico-system/goldmane-7c778bb748-f4t5v"
Jan 23 01:11:32.607700 kubelet[2899]: I0123 01:11:32.563578 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbwf7\" (UniqueName: \"kubernetes.io/projected/236b218f-d8af-4e9e-b6b6-8f9ea312a2ce-kube-api-access-jbwf7\") pod \"calico-apiserver-57f7549777-v6lv7\" (UID: \"236b218f-d8af-4e9e-b6b6-8f9ea312a2ce\") " pod="calico-apiserver/calico-apiserver-57f7549777-v6lv7"
Jan 23 01:11:32.607700 kubelet[2899]: I0123 01:11:32.563740 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ae36994-0284-456d-8619-5a1f2ff25c95-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-f4t5v\" (UID: \"6ae36994-0284-456d-8619-5a1f2ff25c95\") " pod="calico-system/goldmane-7c778bb748-f4t5v"
Jan 23 01:11:32.607700 kubelet[2899]: I0123 01:11:32.563957 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/236b218f-d8af-4e9e-b6b6-8f9ea312a2ce-calico-apiserver-certs\") pod \"calico-apiserver-57f7549777-v6lv7\" (UID: \"236b218f-d8af-4e9e-b6b6-8f9ea312a2ce\") " pod="calico-apiserver/calico-apiserver-57f7549777-v6lv7"
Jan 23 01:11:32.563116 systemd[1]: Created slice kubepods-besteffort-pod6ae36994_0284_456d_8619_5a1f2ff25c95.slice - libcontainer container kubepods-besteffort-pod6ae36994_0284_456d_8619_5a1f2ff25c95.slice.
Jan 23 01:11:32.608141 kubelet[2899]: I0123 01:11:32.564055 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/da5b2d2c-13cd-4988-8a1e-436e3c779260-calico-apiserver-certs\") pod \"calico-apiserver-769444c77-774wh\" (UID: \"da5b2d2c-13cd-4988-8a1e-436e3c779260\") " pod="calico-apiserver/calico-apiserver-769444c77-774wh"
Jan 23 01:11:32.608141 kubelet[2899]: I0123 01:11:32.564116 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d38a34ac-d16c-44a2-b363-28d164fb855d-calico-apiserver-certs\") pod \"calico-apiserver-769444c77-6h5s4\" (UID: \"d38a34ac-d16c-44a2-b363-28d164fb855d\") " pod="calico-apiserver/calico-apiserver-769444c77-6h5s4"
Jan 23 01:11:32.608141 kubelet[2899]: I0123 01:11:32.564216 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwqj5\" (UniqueName: \"kubernetes.io/projected/da5b2d2c-13cd-4988-8a1e-436e3c779260-kube-api-access-gwqj5\") pod \"calico-apiserver-769444c77-774wh\" (UID: \"da5b2d2c-13cd-4988-8a1e-436e3c779260\") " pod="calico-apiserver/calico-apiserver-769444c77-774wh"
Jan 23 01:11:32.608141 kubelet[2899]: I0123 01:11:32.564270 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s94hx\" (UniqueName: \"kubernetes.io/projected/d38a34ac-d16c-44a2-b363-28d164fb855d-kube-api-access-s94hx\") pod \"calico-apiserver-769444c77-6h5s4\" (UID: \"d38a34ac-d16c-44a2-b363-28d164fb855d\") " pod="calico-apiserver/calico-apiserver-769444c77-6h5s4"
Jan 23 01:11:32.608141 kubelet[2899]: I0123 01:11:32.564460 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae36994-0284-456d-8619-5a1f2ff25c95-config\") pod \"goldmane-7c778bb748-f4t5v\" (UID: \"6ae36994-0284-456d-8619-5a1f2ff25c95\") " pod="calico-system/goldmane-7c778bb748-f4t5v"
Jan 23 01:11:32.579850 systemd[1]: Created slice kubepods-besteffort-podd38a34ac_d16c_44a2_b363_28d164fb855d.slice - libcontainer container kubepods-besteffort-podd38a34ac_d16c_44a2_b363_28d164fb855d.slice.
Jan 23 01:11:32.697957 systemd[1]: Created slice kubepods-besteffort-podac789593_88de_4afb_9cdb_f9323fe8cb8a.slice - libcontainer container kubepods-besteffort-podac789593_88de_4afb_9cdb_f9323fe8cb8a.slice.
Jan 23 01:11:32.716245 containerd[1589]: time="2026-01-23T01:11:32.716186070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2q95q,Uid:ac789593-88de-4afb-9cdb-f9323fe8cb8a,Namespace:calico-system,Attempt:0,}"
Jan 23 01:11:32.740341 containerd[1589]: time="2026-01-23T01:11:32.740148867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tz2wv,Uid:f4603d7b-3b99-4c95-a909-967677b55cd1,Namespace:kube-system,Attempt:0,}"
Jan 23 01:11:32.802269 containerd[1589]: time="2026-01-23T01:11:32.801678162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rpxb6,Uid:36b120e1-773f-44d0-abdd-d8ef5044f795,Namespace:kube-system,Attempt:0,}"
Jan 23 01:11:32.829432 containerd[1589]: time="2026-01-23T01:11:32.828478414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bccfcb4d5-9cdwm,Uid:e0436e42-d053-4203-971f-0d3de78e1ec3,Namespace:calico-system,Attempt:0,}"
Jan 23 01:11:32.839823 containerd[1589]: time="2026-01-23T01:11:32.839766302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55994859c6-2x5qp,Uid:bc145d36-eea8-4680-ac11-0b79793cc035,Namespace:calico-system,Attempt:0,}"
Jan 23 01:11:32.911233 containerd[1589]: time="2026-01-23T01:11:32.911165447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 23 01:11:32.961446 containerd[1589]: time="2026-01-23T01:11:32.960640765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769444c77-774wh,Uid:da5b2d2c-13cd-4988-8a1e-436e3c779260,Namespace:calico-apiserver,Attempt:0,}"
Jan 23 01:11:32.963368 containerd[1589]: time="2026-01-23T01:11:32.963317191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f7549777-v6lv7,Uid:236b218f-d8af-4e9e-b6b6-8f9ea312a2ce,Namespace:calico-apiserver,Attempt:0,}"
Jan 23 01:11:32.964675 containerd[1589]: time="2026-01-23T01:11:32.964625204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-f4t5v,Uid:6ae36994-0284-456d-8619-5a1f2ff25c95,Namespace:calico-system,Attempt:0,}"
Jan 23 01:11:32.966552 containerd[1589]: time="2026-01-23T01:11:32.966513612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769444c77-6h5s4,Uid:d38a34ac-d16c-44a2-b363-28d164fb855d,Namespace:calico-apiserver,Attempt:0,}"
Jan 23 01:11:33.205274 containerd[1589]: time="2026-01-23T01:11:33.205198110Z" level=error msg="Failed to destroy network for sandbox \"5b22962b548136f56086afe4656dee7229d983cd0f173601e1fb25ef2554c59f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 01:11:33.211365 containerd[1589]: time="2026-01-23T01:11:33.211058526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bccfcb4d5-9cdwm,Uid:e0436e42-d053-4203-971f-0d3de78e1ec3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b22962b548136f56086afe4656dee7229d983cd0f173601e1fb25ef2554c59f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 01:11:33.213259 kubelet[2899]: E0123 01:11:33.213109 2899 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b22962b548136f56086afe4656dee7229d983cd0f173601e1fb25ef2554c59f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 01:11:33.213779 kubelet[2899]: E0123 01:11:33.213701 2899 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b22962b548136f56086afe4656dee7229d983cd0f173601e1fb25ef2554c59f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bccfcb4d5-9cdwm"
Jan 23 01:11:33.214093 kubelet[2899]: E0123 01:11:33.213906 2899 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b22962b548136f56086afe4656dee7229d983cd0f173601e1fb25ef2554c59f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bccfcb4d5-9cdwm"
Jan 23 01:11:33.216197 kubelet[2899]: E0123 01:11:33.214339 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6bccfcb4d5-9cdwm_calico-system(e0436e42-d053-4203-971f-0d3de78e1ec3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6bccfcb4d5-9cdwm_calico-system(e0436e42-d053-4203-971f-0d3de78e1ec3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b22962b548136f56086afe4656dee7229d983cd0f173601e1fb25ef2554c59f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bccfcb4d5-9cdwm" podUID="e0436e42-d053-4203-971f-0d3de78e1ec3"
Jan 23 01:11:33.249374 containerd[1589]: time="2026-01-23T01:11:33.249301706Z" level=error msg="Failed to destroy network for sandbox \"2d54c21a70a903a5be8b0ed81f9c5d56533bb1e0a10cbd9462a394a9cdf2888e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 01:11:33.252318 containerd[1589]: time="2026-01-23T01:11:33.251858655Z" level=error msg="Failed to destroy network for sandbox \"40690a0c376f7b4882fa82a2e5ee8ea13a76f79d7efd2f1749ec656905e26f77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 01:11:33.257905 containerd[1589]: time="2026-01-23T01:11:33.257854376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2q95q,Uid:ac789593-88de-4afb-9cdb-f9323fe8cb8a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d54c21a70a903a5be8b0ed81f9c5d56533bb1e0a10cbd9462a394a9cdf2888e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 01:11:33.259529 containerd[1589]: time="2026-01-23T01:11:33.259488853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tz2wv,Uid:f4603d7b-3b99-4c95-a909-967677b55cd1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40690a0c376f7b4882fa82a2e5ee8ea13a76f79d7efd2f1749ec656905e26f77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 01:11:33.261073 kubelet[2899]: E0123 01:11:33.260251 2899 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40690a0c376f7b4882fa82a2e5ee8ea13a76f79d7efd2f1749ec656905e26f77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 01:11:33.261073 kubelet[2899]: E0123 01:11:33.260269 2899 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d54c21a70a903a5be8b0ed81f9c5d56533bb1e0a10cbd9462a394a9cdf2888e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 01:11:33.261073 kubelet[2899]: E0123 01:11:33.260344 2899 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40690a0c376f7b4882fa82a2e5ee8ea13a76f79d7efd2f1749ec656905e26f77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tz2wv"
Jan 23 01:11:33.261073 kubelet[2899]: E0123 01:11:33.260381 2899 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d54c21a70a903a5be8b0ed81f9c5d56533bb1e0a10cbd9462a394a9cdf2888e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2q95q"
Jan 23 01:11:33.261338 kubelet[2899]: E0123 01:11:33.260453 2899 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d54c21a70a903a5be8b0ed81f9c5d56533bb1e0a10cbd9462a394a9cdf2888e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2q95q"
Jan 23 01:11:33.261338 kubelet[2899]: E0123 01:11:33.260547 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2q95q_calico-system(ac789593-88de-4afb-9cdb-f9323fe8cb8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2q95q_calico-system(ac789593-88de-4afb-9cdb-f9323fe8cb8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d54c21a70a903a5be8b0ed81f9c5d56533bb1e0a10cbd9462a394a9cdf2888e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2q95q" podUID="ac789593-88de-4afb-9cdb-f9323fe8cb8a"
Jan 23 01:11:33.261828 kubelet[2899]: E0123 01:11:33.260395 2899 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40690a0c376f7b4882fa82a2e5ee8ea13a76f79d7efd2f1749ec656905e26f77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tz2wv"
Jan 23 01:11:33.261828 kubelet[2899]: E0123 01:11:33.261556 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-tz2wv_kube-system(f4603d7b-3b99-4c95-a909-967677b55cd1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-tz2wv_kube-system(f4603d7b-3b99-4c95-a909-967677b55cd1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40690a0c376f7b4882fa82a2e5ee8ea13a76f79d7efd2f1749ec656905e26f77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-tz2wv" podUID="f4603d7b-3b99-4c95-a909-967677b55cd1"
Jan 23 01:11:33.278962 containerd[1589]: time="2026-01-23T01:11:33.278641789Z" level=error msg="Failed to destroy network for sandbox \"167e46448b876d3388a2f338ec7df244658211bcfae6061ae7d63eb06d68e79c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 01:11:33.292856 containerd[1589]: time="2026-01-23T01:11:33.292511198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55994859c6-2x5qp,Uid:bc145d36-eea8-4680-ac11-0b79793cc035,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"167e46448b876d3388a2f338ec7df244658211bcfae6061ae7d63eb06d68e79c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 01:11:33.293836 kubelet[2899]: E0123 01:11:33.292998 2899 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"167e46448b876d3388a2f338ec7df244658211bcfae6061ae7d63eb06d68e79c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 01:11:33.293836 kubelet[2899]: E0123 01:11:33.293077 2899 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"167e46448b876d3388a2f338ec7df244658211bcfae6061ae7d63eb06d68e79c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55994859c6-2x5qp"
Jan 23 01:11:33.293836 kubelet[2899]: E0123 01:11:33.293106 2899 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"167e46448b876d3388a2f338ec7df244658211bcfae6061ae7d63eb06d68e79c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55994859c6-2x5qp"
Jan 23 01:11:33.294034 kubelet[2899]: E0123 01:11:33.293187 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55994859c6-2x5qp_calico-system(bc145d36-eea8-4680-ac11-0b79793cc035)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55994859c6-2x5qp_calico-system(bc145d36-eea8-4680-ac11-0b79793cc035)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"167e46448b876d3388a2f338ec7df244658211bcfae6061ae7d63eb06d68e79c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55994859c6-2x5qp" podUID="bc145d36-eea8-4680-ac11-0b79793cc035" Jan 23 01:11:33.303055 containerd[1589]: time="2026-01-23T01:11:33.302997495Z" level=error msg="Failed to destroy network for sandbox \"05c9185877bf7f64fae13ed5e0fb483ed8e4bea91ba88aba9266d521ae3ef3a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:33.306112 containerd[1589]: time="2026-01-23T01:11:33.306051831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rpxb6,Uid:36b120e1-773f-44d0-abdd-d8ef5044f795,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"05c9185877bf7f64fae13ed5e0fb483ed8e4bea91ba88aba9266d521ae3ef3a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:33.306934 kubelet[2899]: E0123 01:11:33.306438 2899 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05c9185877bf7f64fae13ed5e0fb483ed8e4bea91ba88aba9266d521ae3ef3a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:33.306934 kubelet[2899]: E0123 01:11:33.306511 2899 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05c9185877bf7f64fae13ed5e0fb483ed8e4bea91ba88aba9266d521ae3ef3a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rpxb6" Jan 23 01:11:33.306934 kubelet[2899]: E0123 01:11:33.306558 2899 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05c9185877bf7f64fae13ed5e0fb483ed8e4bea91ba88aba9266d521ae3ef3a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rpxb6" Jan 23 01:11:33.307180 kubelet[2899]: E0123 01:11:33.306635 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-rpxb6_kube-system(36b120e1-773f-44d0-abdd-d8ef5044f795)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-rpxb6_kube-system(36b120e1-773f-44d0-abdd-d8ef5044f795)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05c9185877bf7f64fae13ed5e0fb483ed8e4bea91ba88aba9266d521ae3ef3a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-rpxb6" podUID="36b120e1-773f-44d0-abdd-d8ef5044f795" Jan 23 01:11:33.323679 containerd[1589]: time="2026-01-23T01:11:33.323617425Z" level=error msg="Failed to destroy network for sandbox \"c6dbc523fb31a77ccb9baacbf004975e104205cb71bd29c34c817c071237583f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:33.328913 containerd[1589]: time="2026-01-23T01:11:33.328768970Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-57f7549777-v6lv7,Uid:236b218f-d8af-4e9e-b6b6-8f9ea312a2ce,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6dbc523fb31a77ccb9baacbf004975e104205cb71bd29c34c817c071237583f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:33.329240 kubelet[2899]: E0123 01:11:33.329177 2899 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6dbc523fb31a77ccb9baacbf004975e104205cb71bd29c34c817c071237583f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:33.329373 kubelet[2899]: E0123 01:11:33.329261 2899 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6dbc523fb31a77ccb9baacbf004975e104205cb71bd29c34c817c071237583f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f7549777-v6lv7" Jan 23 01:11:33.329373 kubelet[2899]: E0123 01:11:33.329292 2899 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6dbc523fb31a77ccb9baacbf004975e104205cb71bd29c34c817c071237583f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f7549777-v6lv7" Jan 23 01:11:33.330759 kubelet[2899]: E0123 01:11:33.330560 2899 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57f7549777-v6lv7_calico-apiserver(236b218f-d8af-4e9e-b6b6-8f9ea312a2ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57f7549777-v6lv7_calico-apiserver(236b218f-d8af-4e9e-b6b6-8f9ea312a2ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6dbc523fb31a77ccb9baacbf004975e104205cb71bd29c34c817c071237583f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57f7549777-v6lv7" podUID="236b218f-d8af-4e9e-b6b6-8f9ea312a2ce" Jan 23 01:11:33.362631 containerd[1589]: time="2026-01-23T01:11:33.362568835Z" level=error msg="Failed to destroy network for sandbox \"982061a8c04c2feb481d518588a9c5d347c045dc9f69ae4ed7145558036fc1ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:33.366937 containerd[1589]: time="2026-01-23T01:11:33.366880924Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-f4t5v,Uid:6ae36994-0284-456d-8619-5a1f2ff25c95,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"982061a8c04c2feb481d518588a9c5d347c045dc9f69ae4ed7145558036fc1ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:33.367754 kubelet[2899]: E0123 01:11:33.367320 2899 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"982061a8c04c2feb481d518588a9c5d347c045dc9f69ae4ed7145558036fc1ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:33.367754 kubelet[2899]: E0123 01:11:33.367717 2899 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"982061a8c04c2feb481d518588a9c5d347c045dc9f69ae4ed7145558036fc1ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-f4t5v" Jan 23 01:11:33.368278 kubelet[2899]: E0123 01:11:33.367788 2899 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"982061a8c04c2feb481d518588a9c5d347c045dc9f69ae4ed7145558036fc1ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-f4t5v" Jan 23 01:11:33.368278 kubelet[2899]: E0123 01:11:33.367949 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-f4t5v_calico-system(6ae36994-0284-456d-8619-5a1f2ff25c95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-f4t5v_calico-system(6ae36994-0284-456d-8619-5a1f2ff25c95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"982061a8c04c2feb481d518588a9c5d347c045dc9f69ae4ed7145558036fc1ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-f4t5v" 
podUID="6ae36994-0284-456d-8619-5a1f2ff25c95" Jan 23 01:11:33.382630 containerd[1589]: time="2026-01-23T01:11:33.382570590Z" level=error msg="Failed to destroy network for sandbox \"57e5aef91a635a9231edcbf3d4e9e925035ee8dd276a864283ced3a35b7f8482\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:33.383822 containerd[1589]: time="2026-01-23T01:11:33.383752583Z" level=error msg="Failed to destroy network for sandbox \"f358a4c838b7b65fc7cc5b2d8425a7b8b475b71a0213dceda6ca4ea4ab2feed9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:33.386722 containerd[1589]: time="2026-01-23T01:11:33.386578080Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769444c77-774wh,Uid:da5b2d2c-13cd-4988-8a1e-436e3c779260,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57e5aef91a635a9231edcbf3d4e9e925035ee8dd276a864283ced3a35b7f8482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:33.387027 kubelet[2899]: E0123 01:11:33.386897 2899 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57e5aef91a635a9231edcbf3d4e9e925035ee8dd276a864283ced3a35b7f8482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:33.387150 kubelet[2899]: E0123 01:11:33.387079 2899 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"57e5aef91a635a9231edcbf3d4e9e925035ee8dd276a864283ced3a35b7f8482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769444c77-774wh" Jan 23 01:11:33.387251 kubelet[2899]: E0123 01:11:33.387209 2899 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57e5aef91a635a9231edcbf3d4e9e925035ee8dd276a864283ced3a35b7f8482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769444c77-774wh" Jan 23 01:11:33.387746 kubelet[2899]: E0123 01:11:33.387460 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-769444c77-774wh_calico-apiserver(da5b2d2c-13cd-4988-8a1e-436e3c779260)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-769444c77-774wh_calico-apiserver(da5b2d2c-13cd-4988-8a1e-436e3c779260)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57e5aef91a635a9231edcbf3d4e9e925035ee8dd276a864283ced3a35b7f8482\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769444c77-774wh" podUID="da5b2d2c-13cd-4988-8a1e-436e3c779260" Jan 23 01:11:33.387862 containerd[1589]: time="2026-01-23T01:11:33.387822913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769444c77-6h5s4,Uid:d38a34ac-d16c-44a2-b363-28d164fb855d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"f358a4c838b7b65fc7cc5b2d8425a7b8b475b71a0213dceda6ca4ea4ab2feed9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:33.388282 kubelet[2899]: E0123 01:11:33.388029 2899 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f358a4c838b7b65fc7cc5b2d8425a7b8b475b71a0213dceda6ca4ea4ab2feed9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:33.388282 kubelet[2899]: E0123 01:11:33.388077 2899 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f358a4c838b7b65fc7cc5b2d8425a7b8b475b71a0213dceda6ca4ea4ab2feed9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769444c77-6h5s4" Jan 23 01:11:33.388282 kubelet[2899]: E0123 01:11:33.388101 2899 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f358a4c838b7b65fc7cc5b2d8425a7b8b475b71a0213dceda6ca4ea4ab2feed9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769444c77-6h5s4" Jan 23 01:11:33.388454 kubelet[2899]: E0123 01:11:33.388149 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-769444c77-6h5s4_calico-apiserver(d38a34ac-d16c-44a2-b363-28d164fb855d)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-769444c77-6h5s4_calico-apiserver(d38a34ac-d16c-44a2-b363-28d164fb855d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f358a4c838b7b65fc7cc5b2d8425a7b8b475b71a0213dceda6ca4ea4ab2feed9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769444c77-6h5s4" podUID="d38a34ac-d16c-44a2-b363-28d164fb855d" Jan 23 01:11:43.621475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1966936505.mount: Deactivated successfully. Jan 23 01:11:43.711412 containerd[1589]: time="2026-01-23T01:11:43.700213031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:43.715511 containerd[1589]: time="2026-01-23T01:11:43.715456562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 01:11:43.715870 containerd[1589]: time="2026-01-23T01:11:43.715834356Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:43.717568 containerd[1589]: time="2026-01-23T01:11:43.717533510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:43.720868 containerd[1589]: time="2026-01-23T01:11:43.720713887Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.804996816s" Jan 23 01:11:43.720868 containerd[1589]: time="2026-01-23T01:11:43.720775605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 01:11:43.769149 containerd[1589]: time="2026-01-23T01:11:43.769048306Z" level=info msg="CreateContainer within sandbox \"629454eea8ce3569a92b4c5090ff5299aea6f6c4e946d1515620000f853aa4e4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 01:11:43.850790 containerd[1589]: time="2026-01-23T01:11:43.850602505Z" level=info msg="Container ae6b30de3382a6a97c1dfa4156b39d65d0eef76c6c8e1d18cf0626d712fe4d6c: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:11:43.851072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4006372683.mount: Deactivated successfully. Jan 23 01:11:43.933540 containerd[1589]: time="2026-01-23T01:11:43.933247534Z" level=info msg="CreateContainer within sandbox \"629454eea8ce3569a92b4c5090ff5299aea6f6c4e946d1515620000f853aa4e4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ae6b30de3382a6a97c1dfa4156b39d65d0eef76c6c8e1d18cf0626d712fe4d6c\"" Jan 23 01:11:43.935540 containerd[1589]: time="2026-01-23T01:11:43.935333385Z" level=info msg="StartContainer for \"ae6b30de3382a6a97c1dfa4156b39d65d0eef76c6c8e1d18cf0626d712fe4d6c\"" Jan 23 01:11:43.941637 containerd[1589]: time="2026-01-23T01:11:43.941601746Z" level=info msg="connecting to shim ae6b30de3382a6a97c1dfa4156b39d65d0eef76c6c8e1d18cf0626d712fe4d6c" address="unix:///run/containerd/s/34ea0b45911532ff02e6adb01dbd32b73d10585da550edcdc218f59613f1a75f" protocol=ttrpc version=3 Jan 23 01:11:44.015280 systemd[1]: Started cri-containerd-ae6b30de3382a6a97c1dfa4156b39d65d0eef76c6c8e1d18cf0626d712fe4d6c.scope - libcontainer container 
ae6b30de3382a6a97c1dfa4156b39d65d0eef76c6c8e1d18cf0626d712fe4d6c. Jan 23 01:11:44.168757 containerd[1589]: time="2026-01-23T01:11:44.168709482Z" level=info msg="StartContainer for \"ae6b30de3382a6a97c1dfa4156b39d65d0eef76c6c8e1d18cf0626d712fe4d6c\" returns successfully" Jan 23 01:11:44.617367 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 01:11:44.621471 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 23 01:11:44.645186 containerd[1589]: time="2026-01-23T01:11:44.642938722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2q95q,Uid:ac789593-88de-4afb-9cdb-f9323fe8cb8a,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:44.645186 containerd[1589]: time="2026-01-23T01:11:44.643451632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rpxb6,Uid:36b120e1-773f-44d0-abdd-d8ef5044f795,Namespace:kube-system,Attempt:0,}" Jan 23 01:11:44.646241 containerd[1589]: time="2026-01-23T01:11:44.644419042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bccfcb4d5-9cdwm,Uid:e0436e42-d053-4203-971f-0d3de78e1ec3,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:44.976414 containerd[1589]: time="2026-01-23T01:11:44.973487678Z" level=error msg="Failed to destroy network for sandbox \"3ea2ff4f4d8778c87f1d5faa2a9b17dd035d8852879158d082a5b162a86ae403\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:44.980006 systemd[1]: run-netns-cni\x2de64c394f\x2ddf76\x2d9270\x2de5e3\x2d06514ffbfa99.mount: Deactivated successfully. 
Jan 23 01:11:44.999421 containerd[1589]: time="2026-01-23T01:11:44.984566410Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bccfcb4d5-9cdwm,Uid:e0436e42-d053-4203-971f-0d3de78e1ec3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ea2ff4f4d8778c87f1d5faa2a9b17dd035d8852879158d082a5b162a86ae403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:44.999706 kubelet[2899]: E0123 01:11:44.999114 2899 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ea2ff4f4d8778c87f1d5faa2a9b17dd035d8852879158d082a5b162a86ae403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:44.999706 kubelet[2899]: E0123 01:11:44.999429 2899 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ea2ff4f4d8778c87f1d5faa2a9b17dd035d8852879158d082a5b162a86ae403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bccfcb4d5-9cdwm" Jan 23 01:11:44.999706 kubelet[2899]: E0123 01:11:44.999465 2899 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ea2ff4f4d8778c87f1d5faa2a9b17dd035d8852879158d082a5b162a86ae403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-6bccfcb4d5-9cdwm" Jan 23 01:11:45.000564 kubelet[2899]: E0123 01:11:44.999958 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6bccfcb4d5-9cdwm_calico-system(e0436e42-d053-4203-971f-0d3de78e1ec3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6bccfcb4d5-9cdwm_calico-system(e0436e42-d053-4203-971f-0d3de78e1ec3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ea2ff4f4d8778c87f1d5faa2a9b17dd035d8852879158d082a5b162a86ae403\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bccfcb4d5-9cdwm" podUID="e0436e42-d053-4203-971f-0d3de78e1ec3" Jan 23 01:11:45.000679 containerd[1589]: time="2026-01-23T01:11:45.000201650Z" level=error msg="Failed to destroy network for sandbox \"4869ae854988e6d5f304fa371e8ab1891fe35e6bb247941994c1a299bc4e2b6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:45.006978 systemd[1]: run-netns-cni\x2d2986bc57\x2dbc17\x2d992e\x2dc252\x2d60297471ff54.mount: Deactivated successfully. 
Jan 23 01:11:45.011160 containerd[1589]: time="2026-01-23T01:11:45.010517805Z" level=error msg="Failed to destroy network for sandbox \"f4dcd8b10247ecb808ee5971b8b8aa92fa6b9cc978ff39caf587038a1605ac08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:45.015446 containerd[1589]: time="2026-01-23T01:11:45.014800856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rpxb6,Uid:36b120e1-773f-44d0-abdd-d8ef5044f795,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4869ae854988e6d5f304fa371e8ab1891fe35e6bb247941994c1a299bc4e2b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:45.017326 containerd[1589]: time="2026-01-23T01:11:45.016422967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2q95q,Uid:ac789593-88de-4afb-9cdb-f9323fe8cb8a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4dcd8b10247ecb808ee5971b8b8aa92fa6b9cc978ff39caf587038a1605ac08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:45.018209 kubelet[2899]: E0123 01:11:45.017662 2899 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4dcd8b10247ecb808ee5971b8b8aa92fa6b9cc978ff39caf587038a1605ac08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 
01:11:45.018209 kubelet[2899]: E0123 01:11:45.017695 2899 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4869ae854988e6d5f304fa371e8ab1891fe35e6bb247941994c1a299bc4e2b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:45.018209 kubelet[2899]: E0123 01:11:45.017732 2899 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4dcd8b10247ecb808ee5971b8b8aa92fa6b9cc978ff39caf587038a1605ac08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2q95q" Jan 23 01:11:45.018209 kubelet[2899]: E0123 01:11:45.017774 2899 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4dcd8b10247ecb808ee5971b8b8aa92fa6b9cc978ff39caf587038a1605ac08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2q95q" Jan 23 01:11:45.018690 kubelet[2899]: E0123 01:11:45.017874 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2q95q_calico-system(ac789593-88de-4afb-9cdb-f9323fe8cb8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2q95q_calico-system(ac789593-88de-4afb-9cdb-f9323fe8cb8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4dcd8b10247ecb808ee5971b8b8aa92fa6b9cc978ff39caf587038a1605ac08\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2q95q" podUID="ac789593-88de-4afb-9cdb-f9323fe8cb8a" Jan 23 01:11:45.018690 kubelet[2899]: E0123 01:11:45.017907 2899 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4869ae854988e6d5f304fa371e8ab1891fe35e6bb247941994c1a299bc4e2b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rpxb6" Jan 23 01:11:45.019717 kubelet[2899]: E0123 01:11:45.018253 2899 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4869ae854988e6d5f304fa371e8ab1891fe35e6bb247941994c1a299bc4e2b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rpxb6" Jan 23 01:11:45.023185 kubelet[2899]: E0123 01:11:45.023123 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-rpxb6_kube-system(36b120e1-773f-44d0-abdd-d8ef5044f795)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-rpxb6_kube-system(36b120e1-773f-44d0-abdd-d8ef5044f795)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4869ae854988e6d5f304fa371e8ab1891fe35e6bb247941994c1a299bc4e2b6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-rpxb6" 
podUID="36b120e1-773f-44d0-abdd-d8ef5044f795" Jan 23 01:11:45.070072 kubelet[2899]: I0123 01:11:45.066590 2899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7j9rd" podStartSLOduration=2.8356145919999998 podStartE2EDuration="27.066569191s" podCreationTimestamp="2026-01-23 01:11:18 +0000 UTC" firstStartedPulling="2026-01-23 01:11:19.491232687 +0000 UTC m=+32.139820953" lastFinishedPulling="2026-01-23 01:11:43.722187305 +0000 UTC m=+56.370775552" observedRunningTime="2026-01-23 01:11:45.056933822 +0000 UTC m=+57.705522115" watchObservedRunningTime="2026-01-23 01:11:45.066569191 +0000 UTC m=+57.715157457" Jan 23 01:11:45.626319 systemd[1]: run-netns-cni\x2d55e71f4b\x2de2e4\x2d790f\x2d47a0\x2d6ba5aa7f94d5.mount: Deactivated successfully. Jan 23 01:11:45.630577 containerd[1589]: time="2026-01-23T01:11:45.629992000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55994859c6-2x5qp,Uid:bc145d36-eea8-4680-ac11-0b79793cc035,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:46.060525 systemd-networkd[1481]: cali01a8699df00: Link UP Jan 23 01:11:46.062730 systemd-networkd[1481]: cali01a8699df00: Gained carrier Jan 23 01:11:46.112674 containerd[1589]: 2026-01-23 01:11:45.676 [INFO][4099] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:11:46.112674 containerd[1589]: 2026-01-23 01:11:45.715 [INFO][4099] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p26ko.gb1.brightbox.com-k8s-calico--kube--controllers--55994859c6--2x5qp-eth0 calico-kube-controllers-55994859c6- calico-system bc145d36-eea8-4680-ac11-0b79793cc035 905 0 2026-01-23 01:11:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55994859c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} 
{k8s srv-p26ko.gb1.brightbox.com calico-kube-controllers-55994859c6-2x5qp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali01a8699df00 [] [] }} ContainerID="0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" Namespace="calico-system" Pod="calico-kube-controllers-55994859c6-2x5qp" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--kube--controllers--55994859c6--2x5qp-" Jan 23 01:11:46.112674 containerd[1589]: 2026-01-23 01:11:45.716 [INFO][4099] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" Namespace="calico-system" Pod="calico-kube-controllers-55994859c6-2x5qp" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--kube--controllers--55994859c6--2x5qp-eth0" Jan 23 01:11:46.112674 containerd[1589]: 2026-01-23 01:11:45.936 [INFO][4118] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" HandleID="k8s-pod-network.0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" Workload="srv--p26ko.gb1.brightbox.com-k8s-calico--kube--controllers--55994859c6--2x5qp-eth0" Jan 23 01:11:46.114416 containerd[1589]: 2026-01-23 01:11:45.938 [INFO][4118] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" HandleID="k8s-pod-network.0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" Workload="srv--p26ko.gb1.brightbox.com-k8s-calico--kube--controllers--55994859c6--2x5qp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f980), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-p26ko.gb1.brightbox.com", "pod":"calico-kube-controllers-55994859c6-2x5qp", "timestamp":"2026-01-23 01:11:45.93637288 +0000 UTC"}, Hostname:"srv-p26ko.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:46.114416 containerd[1589]: 2026-01-23 01:11:45.938 [INFO][4118] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:46.114416 containerd[1589]: 2026-01-23 01:11:45.939 [INFO][4118] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:11:46.114416 containerd[1589]: 2026-01-23 01:11:45.939 [INFO][4118] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p26ko.gb1.brightbox.com' Jan 23 01:11:46.114416 containerd[1589]: 2026-01-23 01:11:45.961 [INFO][4118] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:46.114416 containerd[1589]: 2026-01-23 01:11:45.973 [INFO][4118] ipam/ipam.go 394: Looking up existing affinities for host host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:46.114416 containerd[1589]: 2026-01-23 01:11:45.979 [INFO][4118] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:46.114416 containerd[1589]: 2026-01-23 01:11:45.982 [INFO][4118] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:46.114416 containerd[1589]: 2026-01-23 01:11:45.985 [INFO][4118] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:46.117972 containerd[1589]: 2026-01-23 01:11:45.985 [INFO][4118] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:46.117972 containerd[1589]: 2026-01-23 01:11:45.989 [INFO][4118] ipam/ipam.go 1780: Creating new 
handle: k8s-pod-network.0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9 Jan 23 01:11:46.117972 containerd[1589]: 2026-01-23 01:11:45.995 [INFO][4118] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:46.117972 containerd[1589]: 2026-01-23 01:11:46.003 [INFO][4118] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.46.129/26] block=192.168.46.128/26 handle="k8s-pod-network.0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:46.117972 containerd[1589]: 2026-01-23 01:11:46.003 [INFO][4118] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.129/26] handle="k8s-pod-network.0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:46.117972 containerd[1589]: 2026-01-23 01:11:46.003 [INFO][4118] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:11:46.117972 containerd[1589]: 2026-01-23 01:11:46.003 [INFO][4118] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.46.129/26] IPv6=[] ContainerID="0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" HandleID="k8s-pod-network.0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" Workload="srv--p26ko.gb1.brightbox.com-k8s-calico--kube--controllers--55994859c6--2x5qp-eth0" Jan 23 01:11:46.118980 containerd[1589]: 2026-01-23 01:11:46.013 [INFO][4099] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" Namespace="calico-system" Pod="calico-kube-controllers-55994859c6-2x5qp" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--kube--controllers--55994859c6--2x5qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p26ko.gb1.brightbox.com-k8s-calico--kube--controllers--55994859c6--2x5qp-eth0", GenerateName:"calico-kube-controllers-55994859c6-", Namespace:"calico-system", SelfLink:"", UID:"bc145d36-eea8-4680-ac11-0b79793cc035", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55994859c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p26ko.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-55994859c6-2x5qp", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.46.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali01a8699df00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:46.119096 containerd[1589]: 2026-01-23 01:11:46.013 [INFO][4099] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.129/32] ContainerID="0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" Namespace="calico-system" Pod="calico-kube-controllers-55994859c6-2x5qp" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--kube--controllers--55994859c6--2x5qp-eth0" Jan 23 01:11:46.119096 containerd[1589]: 2026-01-23 01:11:46.013 [INFO][4099] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01a8699df00 ContainerID="0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" Namespace="calico-system" Pod="calico-kube-controllers-55994859c6-2x5qp" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--kube--controllers--55994859c6--2x5qp-eth0" Jan 23 01:11:46.119096 containerd[1589]: 2026-01-23 01:11:46.065 [INFO][4099] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" Namespace="calico-system" Pod="calico-kube-controllers-55994859c6-2x5qp" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--kube--controllers--55994859c6--2x5qp-eth0" Jan 23 01:11:46.123227 containerd[1589]: 2026-01-23 01:11:46.076 [INFO][4099] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" Namespace="calico-system" Pod="calico-kube-controllers-55994859c6-2x5qp" 
WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--kube--controllers--55994859c6--2x5qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p26ko.gb1.brightbox.com-k8s-calico--kube--controllers--55994859c6--2x5qp-eth0", GenerateName:"calico-kube-controllers-55994859c6-", Namespace:"calico-system", SelfLink:"", UID:"bc145d36-eea8-4680-ac11-0b79793cc035", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55994859c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p26ko.gb1.brightbox.com", ContainerID:"0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9", Pod:"calico-kube-controllers-55994859c6-2x5qp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.46.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali01a8699df00", MAC:"ce:22:9c:ed:f4:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:46.123324 containerd[1589]: 2026-01-23 01:11:46.107 [INFO][4099] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" Namespace="calico-system" 
Pod="calico-kube-controllers-55994859c6-2x5qp" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--kube--controllers--55994859c6--2x5qp-eth0" Jan 23 01:11:46.187016 kubelet[2899]: I0123 01:11:46.185547 2899 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl2td\" (UniqueName: \"kubernetes.io/projected/e0436e42-d053-4203-971f-0d3de78e1ec3-kube-api-access-tl2td\") pod \"e0436e42-d053-4203-971f-0d3de78e1ec3\" (UID: \"e0436e42-d053-4203-971f-0d3de78e1ec3\") " Jan 23 01:11:46.187016 kubelet[2899]: I0123 01:11:46.186486 2899 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0436e42-d053-4203-971f-0d3de78e1ec3-whisker-ca-bundle\") pod \"e0436e42-d053-4203-971f-0d3de78e1ec3\" (UID: \"e0436e42-d053-4203-971f-0d3de78e1ec3\") " Jan 23 01:11:46.187016 kubelet[2899]: I0123 01:11:46.186574 2899 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0436e42-d053-4203-971f-0d3de78e1ec3-whisker-backend-key-pair\") pod \"e0436e42-d053-4203-971f-0d3de78e1ec3\" (UID: \"e0436e42-d053-4203-971f-0d3de78e1ec3\") " Jan 23 01:11:46.205903 kubelet[2899]: I0123 01:11:46.205653 2899 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0436e42-d053-4203-971f-0d3de78e1ec3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e0436e42-d053-4203-971f-0d3de78e1ec3" (UID: "e0436e42-d053-4203-971f-0d3de78e1ec3"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 01:11:46.217285 kubelet[2899]: I0123 01:11:46.217247 2899 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0436e42-d053-4203-971f-0d3de78e1ec3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e0436e42-d053-4203-971f-0d3de78e1ec3" (UID: "e0436e42-d053-4203-971f-0d3de78e1ec3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 01:11:46.221851 systemd[1]: var-lib-kubelet-pods-e0436e42\x2dd053\x2d4203\x2d971f\x2d0d3de78e1ec3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 01:11:46.233088 systemd[1]: var-lib-kubelet-pods-e0436e42\x2dd053\x2d4203\x2d971f\x2d0d3de78e1ec3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtl2td.mount: Deactivated successfully. Jan 23 01:11:46.233498 kubelet[2899]: I0123 01:11:46.233459 2899 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0436e42-d053-4203-971f-0d3de78e1ec3-kube-api-access-tl2td" (OuterVolumeSpecName: "kube-api-access-tl2td") pod "e0436e42-d053-4203-971f-0d3de78e1ec3" (UID: "e0436e42-d053-4203-971f-0d3de78e1ec3"). InnerVolumeSpecName "kube-api-access-tl2td". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:11:46.287335 kubelet[2899]: I0123 01:11:46.287212 2899 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0436e42-d053-4203-971f-0d3de78e1ec3-whisker-backend-key-pair\") on node \"srv-p26ko.gb1.brightbox.com\" DevicePath \"\"" Jan 23 01:11:46.287335 kubelet[2899]: I0123 01:11:46.287274 2899 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tl2td\" (UniqueName: \"kubernetes.io/projected/e0436e42-d053-4203-971f-0d3de78e1ec3-kube-api-access-tl2td\") on node \"srv-p26ko.gb1.brightbox.com\" DevicePath \"\"" Jan 23 01:11:46.287335 kubelet[2899]: I0123 01:11:46.287298 2899 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0436e42-d053-4203-971f-0d3de78e1ec3-whisker-ca-bundle\") on node \"srv-p26ko.gb1.brightbox.com\" DevicePath \"\"" Jan 23 01:11:46.352940 containerd[1589]: time="2026-01-23T01:11:46.350374678Z" level=info msg="connecting to shim 0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9" address="unix:///run/containerd/s/b6e7a48f1c739787860dd63bcd9e110d73c73626ec300824052d4bd590e72f64" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:46.391661 systemd[1]: Started cri-containerd-0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9.scope - libcontainer container 0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9. 
Jan 23 01:11:46.501967 containerd[1589]: time="2026-01-23T01:11:46.501899496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55994859c6-2x5qp,Uid:bc145d36-eea8-4680-ac11-0b79793cc035,Namespace:calico-system,Attempt:0,} returns sandbox id \"0eab956b11b9e7eec6523eb461c5fa6057698e56acb11bee2856dfef06a42dd9\"" Jan 23 01:11:46.514412 containerd[1589]: time="2026-01-23T01:11:46.513918382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:11:46.626081 containerd[1589]: time="2026-01-23T01:11:46.624734018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769444c77-6h5s4,Uid:d38a34ac-d16c-44a2-b363-28d164fb855d,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:11:46.628430 containerd[1589]: time="2026-01-23T01:11:46.627984116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tz2wv,Uid:f4603d7b-3b99-4c95-a909-967677b55cd1,Namespace:kube-system,Attempt:0,}" Jan 23 01:11:46.630200 containerd[1589]: time="2026-01-23T01:11:46.630166452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-f4t5v,Uid:6ae36994-0284-456d-8619-5a1f2ff25c95,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:46.886776 containerd[1589]: time="2026-01-23T01:11:46.885948317Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:46.893696 containerd[1589]: time="2026-01-23T01:11:46.893285190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:11:46.893696 containerd[1589]: time="2026-01-23T01:11:46.893423126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active 
requests=0, bytes read=85" Jan 23 01:11:46.894000 kubelet[2899]: E0123 01:11:46.893893 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:11:46.897475 kubelet[2899]: E0123 01:11:46.897251 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:11:46.910602 kubelet[2899]: E0123 01:11:46.908973 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-55994859c6-2x5qp_calico-system(bc145d36-eea8-4680-ac11-0b79793cc035): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:46.917029 kubelet[2899]: E0123 01:11:46.911449 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55994859c6-2x5qp" 
podUID="bc145d36-eea8-4680-ac11-0b79793cc035" Jan 23 01:11:47.085820 systemd-networkd[1481]: calic044df27196: Link UP Jan 23 01:11:47.088913 systemd-networkd[1481]: calic044df27196: Gained carrier Jan 23 01:11:47.092561 kubelet[2899]: E0123 01:11:47.091905 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55994859c6-2x5qp" podUID="bc145d36-eea8-4680-ac11-0b79793cc035" Jan 23 01:11:47.127949 systemd[1]: Removed slice kubepods-besteffort-pode0436e42_d053_4203_971f_0d3de78e1ec3.slice - libcontainer container kubepods-besteffort-pode0436e42_d053_4203_971f_0d3de78e1ec3.slice. 
Jan 23 01:11:47.161971 containerd[1589]: 2026-01-23 01:11:46.791 [INFO][4214] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:11:47.161971 containerd[1589]: 2026-01-23 01:11:46.841 [INFO][4214] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p26ko.gb1.brightbox.com-k8s-goldmane--7c778bb748--f4t5v-eth0 goldmane-7c778bb748- calico-system 6ae36994-0284-456d-8619-5a1f2ff25c95 902 0 2026-01-23 01:11:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-p26ko.gb1.brightbox.com goldmane-7c778bb748-f4t5v eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic044df27196 [] [] }} ContainerID="7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" Namespace="calico-system" Pod="goldmane-7c778bb748-f4t5v" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-goldmane--7c778bb748--f4t5v-" Jan 23 01:11:47.161971 containerd[1589]: 2026-01-23 01:11:46.843 [INFO][4214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" Namespace="calico-system" Pod="goldmane-7c778bb748-f4t5v" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-goldmane--7c778bb748--f4t5v-eth0" Jan 23 01:11:47.161971 containerd[1589]: 2026-01-23 01:11:46.964 [INFO][4295] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" HandleID="k8s-pod-network.7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" Workload="srv--p26ko.gb1.brightbox.com-k8s-goldmane--7c778bb748--f4t5v-eth0" Jan 23 01:11:47.165502 containerd[1589]: 2026-01-23 01:11:46.964 [INFO][4295] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" HandleID="k8s-pod-network.7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" Workload="srv--p26ko.gb1.brightbox.com-k8s-goldmane--7c778bb748--f4t5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001ad530), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-p26ko.gb1.brightbox.com", "pod":"goldmane-7c778bb748-f4t5v", "timestamp":"2026-01-23 01:11:46.964094505 +0000 UTC"}, Hostname:"srv-p26ko.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:47.165502 containerd[1589]: 2026-01-23 01:11:46.964 [INFO][4295] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:47.165502 containerd[1589]: 2026-01-23 01:11:46.964 [INFO][4295] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:11:47.165502 containerd[1589]: 2026-01-23 01:11:46.964 [INFO][4295] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p26ko.gb1.brightbox.com' Jan 23 01:11:47.165502 containerd[1589]: 2026-01-23 01:11:46.978 [INFO][4295] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.165502 containerd[1589]: 2026-01-23 01:11:46.991 [INFO][4295] ipam/ipam.go 394: Looking up existing affinities for host host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.165502 containerd[1589]: 2026-01-23 01:11:47.004 [INFO][4295] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.165502 containerd[1589]: 2026-01-23 01:11:47.007 [INFO][4295] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.165502 containerd[1589]: 2026-01-23 01:11:47.011 [INFO][4295] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.165969 containerd[1589]: 2026-01-23 01:11:47.012 [INFO][4295] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.165969 containerd[1589]: 2026-01-23 01:11:47.016 [INFO][4295] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff Jan 23 01:11:47.165969 containerd[1589]: 2026-01-23 01:11:47.030 [INFO][4295] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.165969 containerd[1589]: 2026-01-23 01:11:47.055 [INFO][4295] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.46.130/26] block=192.168.46.128/26 handle="k8s-pod-network.7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.165969 containerd[1589]: 2026-01-23 01:11:47.055 [INFO][4295] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.130/26] handle="k8s-pod-network.7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.165969 containerd[1589]: 2026-01-23 01:11:47.055 [INFO][4295] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:11:47.165969 containerd[1589]: 2026-01-23 01:11:47.056 [INFO][4295] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.46.130/26] IPv6=[] ContainerID="7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" HandleID="k8s-pod-network.7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" Workload="srv--p26ko.gb1.brightbox.com-k8s-goldmane--7c778bb748--f4t5v-eth0" Jan 23 01:11:47.172928 containerd[1589]: 2026-01-23 01:11:47.070 [INFO][4214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" Namespace="calico-system" Pod="goldmane-7c778bb748-f4t5v" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-goldmane--7c778bb748--f4t5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p26ko.gb1.brightbox.com-k8s-goldmane--7c778bb748--f4t5v-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"6ae36994-0284-456d-8619-5a1f2ff25c95", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p26ko.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-7c778bb748-f4t5v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.46.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic044df27196", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:47.173052 containerd[1589]: 2026-01-23 01:11:47.070 [INFO][4214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.130/32] ContainerID="7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" Namespace="calico-system" Pod="goldmane-7c778bb748-f4t5v" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-goldmane--7c778bb748--f4t5v-eth0" Jan 23 01:11:47.173052 containerd[1589]: 2026-01-23 01:11:47.070 [INFO][4214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic044df27196 ContainerID="7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" Namespace="calico-system" Pod="goldmane-7c778bb748-f4t5v" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-goldmane--7c778bb748--f4t5v-eth0" Jan 23 01:11:47.173052 containerd[1589]: 2026-01-23 01:11:47.098 [INFO][4214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" Namespace="calico-system" Pod="goldmane-7c778bb748-f4t5v" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-goldmane--7c778bb748--f4t5v-eth0" Jan 23 01:11:47.173225 containerd[1589]: 
2026-01-23 01:11:47.099 [INFO][4214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" Namespace="calico-system" Pod="goldmane-7c778bb748-f4t5v" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-goldmane--7c778bb748--f4t5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p26ko.gb1.brightbox.com-k8s-goldmane--7c778bb748--f4t5v-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"6ae36994-0284-456d-8619-5a1f2ff25c95", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p26ko.gb1.brightbox.com", ContainerID:"7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff", Pod:"goldmane-7c778bb748-f4t5v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.46.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic044df27196", MAC:"32:0e:10:56:22:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:47.173331 containerd[1589]: 2026-01-23 01:11:47.134 [INFO][4214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" Namespace="calico-system" Pod="goldmane-7c778bb748-f4t5v" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-goldmane--7c778bb748--f4t5v-eth0" Jan 23 01:11:47.327012 systemd-networkd[1481]: cali34a6a1806bc: Link UP Jan 23 01:11:47.329536 systemd-networkd[1481]: cali34a6a1806bc: Gained carrier Jan 23 01:11:47.346840 containerd[1589]: time="2026-01-23T01:11:47.346735171Z" level=info msg="connecting to shim 7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff" address="unix:///run/containerd/s/53bc5dbc74f1b3aaa40dfac35d7300b1b5111578fd78b0f4ed01014fa7bc509f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:47.397217 containerd[1589]: 2026-01-23 01:11:46.720 [INFO][4203] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:11:47.397217 containerd[1589]: 2026-01-23 01:11:46.765 [INFO][4203] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--6h5s4-eth0 calico-apiserver-769444c77- calico-apiserver d38a34ac-d16c-44a2-b363-28d164fb855d 901 0 2026-01-23 01:11:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:769444c77 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-p26ko.gb1.brightbox.com calico-apiserver-769444c77-6h5s4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali34a6a1806bc [] [] }} ContainerID="5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" Namespace="calico-apiserver" Pod="calico-apiserver-769444c77-6h5s4" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--6h5s4-" Jan 23 01:11:47.397217 containerd[1589]: 2026-01-23 01:11:46.765 [INFO][4203] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" Namespace="calico-apiserver" Pod="calico-apiserver-769444c77-6h5s4" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--6h5s4-eth0" Jan 23 01:11:47.397217 containerd[1589]: 2026-01-23 01:11:46.964 [INFO][4271] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" HandleID="k8s-pod-network.5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" Workload="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--6h5s4-eth0" Jan 23 01:11:47.397864 containerd[1589]: 2026-01-23 01:11:46.965 [INFO][4271] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" HandleID="k8s-pod-network.5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" Workload="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--6h5s4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002a94d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-p26ko.gb1.brightbox.com", "pod":"calico-apiserver-769444c77-6h5s4", "timestamp":"2026-01-23 01:11:46.964641998 +0000 UTC"}, Hostname:"srv-p26ko.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:47.397864 containerd[1589]: 2026-01-23 01:11:46.966 [INFO][4271] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:47.397864 containerd[1589]: 2026-01-23 01:11:47.055 [INFO][4271] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:11:47.397864 containerd[1589]: 2026-01-23 01:11:47.061 [INFO][4271] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p26ko.gb1.brightbox.com' Jan 23 01:11:47.397864 containerd[1589]: 2026-01-23 01:11:47.111 [INFO][4271] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.397864 containerd[1589]: 2026-01-23 01:11:47.165 [INFO][4271] ipam/ipam.go 394: Looking up existing affinities for host host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.397864 containerd[1589]: 2026-01-23 01:11:47.198 [INFO][4271] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.397864 containerd[1589]: 2026-01-23 01:11:47.207 [INFO][4271] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.397864 containerd[1589]: 2026-01-23 01:11:47.222 [INFO][4271] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.400030 containerd[1589]: 2026-01-23 01:11:47.222 [INFO][4271] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.400030 containerd[1589]: 2026-01-23 01:11:47.227 [INFO][4271] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d Jan 23 01:11:47.400030 containerd[1589]: 2026-01-23 01:11:47.262 [INFO][4271] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.400030 containerd[1589]: 2026-01-23 01:11:47.295 [INFO][4271] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.46.131/26] block=192.168.46.128/26 handle="k8s-pod-network.5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.400030 containerd[1589]: 2026-01-23 01:11:47.295 [INFO][4271] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.131/26] handle="k8s-pod-network.5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.400030 containerd[1589]: 2026-01-23 01:11:47.300 [INFO][4271] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:11:47.400030 containerd[1589]: 2026-01-23 01:11:47.300 [INFO][4271] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.46.131/26] IPv6=[] ContainerID="5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" HandleID="k8s-pod-network.5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" Workload="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--6h5s4-eth0" Jan 23 01:11:47.400506 containerd[1589]: 2026-01-23 01:11:47.308 [INFO][4203] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" Namespace="calico-apiserver" Pod="calico-apiserver-769444c77-6h5s4" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--6h5s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--6h5s4-eth0", GenerateName:"calico-apiserver-769444c77-", Namespace:"calico-apiserver", SelfLink:"", UID:"d38a34ac-d16c-44a2-b363-28d164fb855d", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769444c77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p26ko.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-769444c77-6h5s4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali34a6a1806bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:47.400607 containerd[1589]: 2026-01-23 01:11:47.308 [INFO][4203] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.131/32] ContainerID="5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" Namespace="calico-apiserver" Pod="calico-apiserver-769444c77-6h5s4" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--6h5s4-eth0" Jan 23 01:11:47.400607 containerd[1589]: 2026-01-23 01:11:47.309 [INFO][4203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34a6a1806bc ContainerID="5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" Namespace="calico-apiserver" Pod="calico-apiserver-769444c77-6h5s4" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--6h5s4-eth0" Jan 23 01:11:47.400607 containerd[1589]: 2026-01-23 01:11:47.336 [INFO][4203] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" Namespace="calico-apiserver" 
Pod="calico-apiserver-769444c77-6h5s4" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--6h5s4-eth0" Jan 23 01:11:47.400790 containerd[1589]: 2026-01-23 01:11:47.343 [INFO][4203] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" Namespace="calico-apiserver" Pod="calico-apiserver-769444c77-6h5s4" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--6h5s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--6h5s4-eth0", GenerateName:"calico-apiserver-769444c77-", Namespace:"calico-apiserver", SelfLink:"", UID:"d38a34ac-d16c-44a2-b363-28d164fb855d", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769444c77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p26ko.gb1.brightbox.com", ContainerID:"5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d", Pod:"calico-apiserver-769444c77-6h5s4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali34a6a1806bc", MAC:"32:5c:60:69:d8:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:47.400909 containerd[1589]: 2026-01-23 01:11:47.382 [INFO][4203] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" Namespace="calico-apiserver" Pod="calico-apiserver-769444c77-6h5s4" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--6h5s4-eth0" Jan 23 01:11:47.505242 systemd[1]: Created slice kubepods-besteffort-poded71740a_4cd8_4c4d_959e_402af8a98785.slice - libcontainer container kubepods-besteffort-poded71740a_4cd8_4c4d_959e_402af8a98785.slice. Jan 23 01:11:47.518293 kubelet[2899]: I0123 01:11:47.514367 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ed71740a-4cd8-4c4d-959e-402af8a98785-whisker-backend-key-pair\") pod \"whisker-5b94489fd9-glnbg\" (UID: \"ed71740a-4cd8-4c4d-959e-402af8a98785\") " pod="calico-system/whisker-5b94489fd9-glnbg" Jan 23 01:11:47.518293 kubelet[2899]: I0123 01:11:47.517759 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed71740a-4cd8-4c4d-959e-402af8a98785-whisker-ca-bundle\") pod \"whisker-5b94489fd9-glnbg\" (UID: \"ed71740a-4cd8-4c4d-959e-402af8a98785\") " pod="calico-system/whisker-5b94489fd9-glnbg" Jan 23 01:11:47.518293 kubelet[2899]: I0123 01:11:47.517856 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkdgl\" (UniqueName: \"kubernetes.io/projected/ed71740a-4cd8-4c4d-959e-402af8a98785-kube-api-access-hkdgl\") pod \"whisker-5b94489fd9-glnbg\" (UID: \"ed71740a-4cd8-4c4d-959e-402af8a98785\") " pod="calico-system/whisker-5b94489fd9-glnbg" 
Jan 23 01:11:47.521061 systemd[1]: Started cri-containerd-7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff.scope - libcontainer container 7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff. Jan 23 01:11:47.664178 systemd-networkd[1481]: cali8881978a645: Link UP Jan 23 01:11:47.666764 systemd-networkd[1481]: cali8881978a645: Gained carrier Jan 23 01:11:47.700631 containerd[1589]: time="2026-01-23T01:11:47.700567401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f7549777-v6lv7,Uid:236b218f-d8af-4e9e-b6b6-8f9ea312a2ce,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:11:47.712810 containerd[1589]: time="2026-01-23T01:11:47.712737364Z" level=info msg="connecting to shim 5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d" address="unix:///run/containerd/s/1fb63a22464fd1c7336cb82eae4e33add98a479a24cc1e8dc980ddb5ba302b9e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:47.720051 containerd[1589]: time="2026-01-23T01:11:47.719965967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769444c77-774wh,Uid:da5b2d2c-13cd-4988-8a1e-436e3c779260,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:11:47.740433 kubelet[2899]: I0123 01:11:47.739774 2899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0436e42-d053-4203-971f-0d3de78e1ec3" path="/var/lib/kubelet/pods/e0436e42-d053-4203-971f-0d3de78e1ec3/volumes" Jan 23 01:11:47.841488 containerd[1589]: 2026-01-23 01:11:46.804 [INFO][4206] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:11:47.841488 containerd[1589]: 2026-01-23 01:11:46.854 [INFO][4206] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--tz2wv-eth0 coredns-66bc5c9577- kube-system f4603d7b-3b99-4c95-a909-967677b55cd1 892 0 2026-01-23 01:10:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-p26ko.gb1.brightbox.com coredns-66bc5c9577-tz2wv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8881978a645 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" Namespace="kube-system" Pod="coredns-66bc5c9577-tz2wv" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--tz2wv-" Jan 23 01:11:47.841488 containerd[1589]: 2026-01-23 01:11:46.854 [INFO][4206] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" Namespace="kube-system" Pod="coredns-66bc5c9577-tz2wv" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--tz2wv-eth0" Jan 23 01:11:47.841488 containerd[1589]: 2026-01-23 01:11:46.983 [INFO][4298] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" HandleID="k8s-pod-network.a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" Workload="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--tz2wv-eth0" Jan 23 01:11:47.841972 containerd[1589]: 2026-01-23 01:11:46.983 [INFO][4298] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" HandleID="k8s-pod-network.a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" Workload="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--tz2wv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000381e80), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-p26ko.gb1.brightbox.com", "pod":"coredns-66bc5c9577-tz2wv", "timestamp":"2026-01-23 01:11:46.983520044 +0000 UTC"}, 
Hostname:"srv-p26ko.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:47.841972 containerd[1589]: 2026-01-23 01:11:46.983 [INFO][4298] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:47.841972 containerd[1589]: 2026-01-23 01:11:47.300 [INFO][4298] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:11:47.841972 containerd[1589]: 2026-01-23 01:11:47.301 [INFO][4298] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p26ko.gb1.brightbox.com' Jan 23 01:11:47.841972 containerd[1589]: 2026-01-23 01:11:47.377 [INFO][4298] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.841972 containerd[1589]: 2026-01-23 01:11:47.394 [INFO][4298] ipam/ipam.go 394: Looking up existing affinities for host host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.841972 containerd[1589]: 2026-01-23 01:11:47.421 [INFO][4298] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.841972 containerd[1589]: 2026-01-23 01:11:47.445 [INFO][4298] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.841972 containerd[1589]: 2026-01-23 01:11:47.463 [INFO][4298] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.842870 containerd[1589]: 2026-01-23 01:11:47.464 [INFO][4298] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.842870 containerd[1589]: 
2026-01-23 01:11:47.480 [INFO][4298] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a Jan 23 01:11:47.842870 containerd[1589]: 2026-01-23 01:11:47.531 [INFO][4298] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.842870 containerd[1589]: 2026-01-23 01:11:47.556 [INFO][4298] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.46.132/26] block=192.168.46.128/26 handle="k8s-pod-network.a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.842870 containerd[1589]: 2026-01-23 01:11:47.556 [INFO][4298] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.132/26] handle="k8s-pod-network.a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:47.842870 containerd[1589]: 2026-01-23 01:11:47.556 [INFO][4298] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:11:47.842870 containerd[1589]: 2026-01-23 01:11:47.556 [INFO][4298] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.46.132/26] IPv6=[] ContainerID="a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" HandleID="k8s-pod-network.a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" Workload="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--tz2wv-eth0" Jan 23 01:11:47.843214 containerd[1589]: 2026-01-23 01:11:47.627 [INFO][4206] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" Namespace="kube-system" Pod="coredns-66bc5c9577-tz2wv" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--tz2wv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--tz2wv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f4603d7b-3b99-4c95-a909-967677b55cd1", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p26ko.gb1.brightbox.com", ContainerID:"", Pod:"coredns-66bc5c9577-tz2wv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali8881978a645", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:47.843214 containerd[1589]: 2026-01-23 01:11:47.649 [INFO][4206] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.132/32] ContainerID="a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" Namespace="kube-system" Pod="coredns-66bc5c9577-tz2wv" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--tz2wv-eth0" Jan 23 01:11:47.843214 containerd[1589]: 2026-01-23 01:11:47.649 [INFO][4206] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8881978a645 ContainerID="a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" Namespace="kube-system" Pod="coredns-66bc5c9577-tz2wv" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--tz2wv-eth0" Jan 23 01:11:47.843214 containerd[1589]: 2026-01-23 01:11:47.669 [INFO][4206] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" Namespace="kube-system" Pod="coredns-66bc5c9577-tz2wv" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--tz2wv-eth0" Jan 23 
01:11:47.843214 containerd[1589]: 2026-01-23 01:11:47.671 [INFO][4206] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" Namespace="kube-system" Pod="coredns-66bc5c9577-tz2wv" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--tz2wv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--tz2wv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f4603d7b-3b99-4c95-a909-967677b55cd1", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p26ko.gb1.brightbox.com", ContainerID:"a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a", Pod:"coredns-66bc5c9577-tz2wv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8881978a645", MAC:"ea:a2:c0:c4:cb:ee", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:47.844751 containerd[1589]: 2026-01-23 01:11:47.791 [INFO][4206] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" Namespace="kube-system" Pod="coredns-66bc5c9577-tz2wv" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--tz2wv-eth0" Jan 23 01:11:47.847404 containerd[1589]: time="2026-01-23T01:11:47.846324734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b94489fd9-glnbg,Uid:ed71740a-4cd8-4c4d-959e-402af8a98785,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:47.968051 systemd-networkd[1481]: cali01a8699df00: Gained IPv6LL Jan 23 01:11:48.049961 kubelet[2899]: E0123 01:11:48.049672 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55994859c6-2x5qp" podUID="bc145d36-eea8-4680-ac11-0b79793cc035" Jan 23 01:11:48.095061 systemd[1]: Started 
cri-containerd-5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d.scope - libcontainer container 5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d. Jan 23 01:11:48.115871 containerd[1589]: time="2026-01-23T01:11:48.115702571Z" level=info msg="connecting to shim a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a" address="unix:///run/containerd/s/19c6ff476a4b6f353a23d79d07f733fc7c8bdc164f44470ce00452391d871096" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:48.269447 containerd[1589]: time="2026-01-23T01:11:48.269120134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-f4t5v,Uid:6ae36994-0284-456d-8619-5a1f2ff25c95,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f186365d9e0d081900bcfc55d06d7fc294e715fe16598624b1a66af2c409eff\"" Jan 23 01:11:48.277404 containerd[1589]: time="2026-01-23T01:11:48.276712584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:11:48.341050 systemd[1]: Started cri-containerd-a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a.scope - libcontainer container a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a. 
Jan 23 01:11:48.589001 containerd[1589]: time="2026-01-23T01:11:48.588924309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tz2wv,Uid:f4603d7b-3b99-4c95-a909-967677b55cd1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a\"" Jan 23 01:11:48.608088 containerd[1589]: time="2026-01-23T01:11:48.607875539Z" level=info msg="CreateContainer within sandbox \"a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:11:48.644603 containerd[1589]: time="2026-01-23T01:11:48.644536768Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:48.654582 containerd[1589]: time="2026-01-23T01:11:48.653932356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:11:48.655693 containerd[1589]: time="2026-01-23T01:11:48.654413308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:11:48.656088 kubelet[2899]: E0123 01:11:48.655311 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:11:48.656088 kubelet[2899]: E0123 01:11:48.655853 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:11:48.657704 kubelet[2899]: E0123 01:11:48.657016 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-f4t5v_calico-system(6ae36994-0284-456d-8619-5a1f2ff25c95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:48.657704 kubelet[2899]: E0123 01:11:48.657091 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f4t5v" podUID="6ae36994-0284-456d-8619-5a1f2ff25c95" Jan 23 01:11:48.667586 systemd-networkd[1481]: califbecf82bb39: Link UP Jan 23 01:11:48.686722 systemd-networkd[1481]: califbecf82bb39: Gained carrier Jan 23 01:11:48.699077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1528492953.mount: Deactivated successfully. 
Jan 23 01:11:48.714737 containerd[1589]: time="2026-01-23T01:11:48.711808561Z" level=info msg="Container 128039cbf8b01e921974666750f70769172e0e2eaadb37e6ba4d261e4da4ef10: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:11:48.734008 containerd[1589]: time="2026-01-23T01:11:48.733911922Z" level=info msg="CreateContainer within sandbox \"a60ea92933eebf074f4087511e3c3da8157548c5396b3f4a35491bda33495f1a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"128039cbf8b01e921974666750f70769172e0e2eaadb37e6ba4d261e4da4ef10\"" Jan 23 01:11:48.736858 containerd[1589]: time="2026-01-23T01:11:48.736566694Z" level=info msg="StartContainer for \"128039cbf8b01e921974666750f70769172e0e2eaadb37e6ba4d261e4da4ef10\"" Jan 23 01:11:48.740930 containerd[1589]: time="2026-01-23T01:11:48.740890897Z" level=info msg="connecting to shim 128039cbf8b01e921974666750f70769172e0e2eaadb37e6ba4d261e4da4ef10" address="unix:///run/containerd/s/19c6ff476a4b6f353a23d79d07f733fc7c8bdc164f44470ce00452391d871096" protocol=ttrpc version=3 Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.123 [INFO][4449] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.192 [INFO][4449] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--57f7549777--v6lv7-eth0 calico-apiserver-57f7549777- calico-apiserver 236b218f-d8af-4e9e-b6b6-8f9ea312a2ce 898 0 2026-01-23 01:11:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57f7549777 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-p26ko.gb1.brightbox.com calico-apiserver-57f7549777-v6lv7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califbecf82bb39 [] [] }} 
ContainerID="375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" Namespace="calico-apiserver" Pod="calico-apiserver-57f7549777-v6lv7" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--57f7549777--v6lv7-" Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.194 [INFO][4449] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" Namespace="calico-apiserver" Pod="calico-apiserver-57f7549777-v6lv7" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--57f7549777--v6lv7-eth0" Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.498 [INFO][4538] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" HandleID="k8s-pod-network.375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" Workload="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--57f7549777--v6lv7-eth0" Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.499 [INFO][4538] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" HandleID="k8s-pod-network.375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" Workload="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--57f7549777--v6lv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001033c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-p26ko.gb1.brightbox.com", "pod":"calico-apiserver-57f7549777-v6lv7", "timestamp":"2026-01-23 01:11:48.498935564 +0000 UTC"}, Hostname:"srv-p26ko.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.501 [INFO][4538] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.501 [INFO][4538] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.501 [INFO][4538] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p26ko.gb1.brightbox.com' Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.530 [INFO][4538] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.552 [INFO][4538] ipam/ipam.go 394: Looking up existing affinities for host host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.569 [INFO][4538] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.577 [INFO][4538] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.583 [INFO][4538] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.584 [INFO][4538] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.588 [INFO][4538] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7 Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.603 [INFO][4538] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.46.128/26 
handle="k8s-pod-network.375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.624 [INFO][4538] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.46.133/26] block=192.168.46.128/26 handle="k8s-pod-network.375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.624 [INFO][4538] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.133/26] handle="k8s-pod-network.375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.624 [INFO][4538] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:11:48.753857 containerd[1589]: 2026-01-23 01:11:48.625 [INFO][4538] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.46.133/26] IPv6=[] ContainerID="375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" HandleID="k8s-pod-network.375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" Workload="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--57f7549777--v6lv7-eth0" Jan 23 01:11:48.756921 containerd[1589]: 2026-01-23 01:11:48.652 [INFO][4449] cni-plugin/k8s.go 418: Populated endpoint ContainerID="375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" Namespace="calico-apiserver" Pod="calico-apiserver-57f7549777-v6lv7" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--57f7549777--v6lv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--57f7549777--v6lv7-eth0", GenerateName:"calico-apiserver-57f7549777-", Namespace:"calico-apiserver", SelfLink:"", UID:"236b218f-d8af-4e9e-b6b6-8f9ea312a2ce", ResourceVersion:"898", 
Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57f7549777", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p26ko.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-57f7549777-v6lv7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califbecf82bb39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:48.756921 containerd[1589]: 2026-01-23 01:11:48.653 [INFO][4449] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.133/32] ContainerID="375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" Namespace="calico-apiserver" Pod="calico-apiserver-57f7549777-v6lv7" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--57f7549777--v6lv7-eth0" Jan 23 01:11:48.756921 containerd[1589]: 2026-01-23 01:11:48.653 [INFO][4449] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbecf82bb39 ContainerID="375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" Namespace="calico-apiserver" Pod="calico-apiserver-57f7549777-v6lv7" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--57f7549777--v6lv7-eth0" Jan 23 01:11:48.756921 containerd[1589]: 2026-01-23 
01:11:48.686 [INFO][4449] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" Namespace="calico-apiserver" Pod="calico-apiserver-57f7549777-v6lv7" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--57f7549777--v6lv7-eth0" Jan 23 01:11:48.756921 containerd[1589]: 2026-01-23 01:11:48.693 [INFO][4449] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" Namespace="calico-apiserver" Pod="calico-apiserver-57f7549777-v6lv7" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--57f7549777--v6lv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--57f7549777--v6lv7-eth0", GenerateName:"calico-apiserver-57f7549777-", Namespace:"calico-apiserver", SelfLink:"", UID:"236b218f-d8af-4e9e-b6b6-8f9ea312a2ce", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57f7549777", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p26ko.gb1.brightbox.com", ContainerID:"375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7", Pod:"calico-apiserver-57f7549777-v6lv7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.46.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califbecf82bb39", MAC:"02:c5:9d:90:da:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:48.756921 containerd[1589]: 2026-01-23 01:11:48.746 [INFO][4449] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" Namespace="calico-apiserver" Pod="calico-apiserver-57f7549777-v6lv7" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--57f7549777--v6lv7-eth0" Jan 23 01:11:48.796051 systemd-networkd[1481]: calic044df27196: Gained IPv6LL Jan 23 01:11:48.801887 containerd[1589]: time="2026-01-23T01:11:48.801814987Z" level=info msg="connecting to shim 375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7" address="unix:///run/containerd/s/79e995ea205606d5cf91d45e5f5a948c1c8f6e3ddeab627b9128ec45bf465333" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:48.832109 systemd[1]: Started cri-containerd-128039cbf8b01e921974666750f70769172e0e2eaadb37e6ba4d261e4da4ef10.scope - libcontainer container 128039cbf8b01e921974666750f70769172e0e2eaadb37e6ba4d261e4da4ef10. 
Jan 23 01:11:48.840163 systemd-networkd[1481]: cali4c44c731f28: Link UP Jan 23 01:11:48.845183 systemd-networkd[1481]: cali4c44c731f28: Gained carrier Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.157 [INFO][4444] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.294 [INFO][4444] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--774wh-eth0 calico-apiserver-769444c77- calico-apiserver da5b2d2c-13cd-4988-8a1e-436e3c779260 907 0 2026-01-23 01:11:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:769444c77 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-p26ko.gb1.brightbox.com calico-apiserver-769444c77-774wh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4c44c731f28 [] [] }} ContainerID="8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" Namespace="calico-apiserver" Pod="calico-apiserver-769444c77-774wh" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--774wh-" Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.294 [INFO][4444] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" Namespace="calico-apiserver" Pod="calico-apiserver-769444c77-774wh" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--774wh-eth0" Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.533 [INFO][4554] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" 
HandleID="k8s-pod-network.8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" Workload="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--774wh-eth0" Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.536 [INFO][4554] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" HandleID="k8s-pod-network.8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" Workload="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--774wh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003926a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-p26ko.gb1.brightbox.com", "pod":"calico-apiserver-769444c77-774wh", "timestamp":"2026-01-23 01:11:48.533399239 +0000 UTC"}, Hostname:"srv-p26ko.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.541 [INFO][4554] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.625 [INFO][4554] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.626 [INFO][4554] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p26ko.gb1.brightbox.com' Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.653 [INFO][4554] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.718 [INFO][4554] ipam/ipam.go 394: Looking up existing affinities for host host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.751 [INFO][4554] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.755 [INFO][4554] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.759 [INFO][4554] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.760 [INFO][4554] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.766 [INFO][4554] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.780 [INFO][4554] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.810 [INFO][4554] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.46.134/26] block=192.168.46.128/26 handle="k8s-pod-network.8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.810 [INFO][4554] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.134/26] handle="k8s-pod-network.8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.810 [INFO][4554] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:11:48.916797 containerd[1589]: 2026-01-23 01:11:48.811 [INFO][4554] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.46.134/26] IPv6=[] ContainerID="8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" HandleID="k8s-pod-network.8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" Workload="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--774wh-eth0" Jan 23 01:11:48.919879 containerd[1589]: 2026-01-23 01:11:48.826 [INFO][4444] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" Namespace="calico-apiserver" Pod="calico-apiserver-769444c77-774wh" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--774wh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--774wh-eth0", GenerateName:"calico-apiserver-769444c77-", Namespace:"calico-apiserver", SelfLink:"", UID:"da5b2d2c-13cd-4988-8a1e-436e3c779260", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769444c77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p26ko.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-769444c77-774wh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c44c731f28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:48.919879 containerd[1589]: 2026-01-23 01:11:48.827 [INFO][4444] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.134/32] ContainerID="8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" Namespace="calico-apiserver" Pod="calico-apiserver-769444c77-774wh" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--774wh-eth0" Jan 23 01:11:48.919879 containerd[1589]: 2026-01-23 01:11:48.827 [INFO][4444] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c44c731f28 ContainerID="8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" Namespace="calico-apiserver" Pod="calico-apiserver-769444c77-774wh" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--774wh-eth0" Jan 23 01:11:48.919879 containerd[1589]: 2026-01-23 01:11:48.848 [INFO][4444] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" Namespace="calico-apiserver" 
Pod="calico-apiserver-769444c77-774wh" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--774wh-eth0" Jan 23 01:11:48.919879 containerd[1589]: 2026-01-23 01:11:48.857 [INFO][4444] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" Namespace="calico-apiserver" Pod="calico-apiserver-769444c77-774wh" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--774wh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--774wh-eth0", GenerateName:"calico-apiserver-769444c77-", Namespace:"calico-apiserver", SelfLink:"", UID:"da5b2d2c-13cd-4988-8a1e-436e3c779260", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769444c77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p26ko.gb1.brightbox.com", ContainerID:"8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e", Pod:"calico-apiserver-769444c77-774wh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali4c44c731f28", MAC:"d2:b5:16:e9:58:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:48.919879 containerd[1589]: 2026-01-23 01:11:48.895 [INFO][4444] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" Namespace="calico-apiserver" Pod="calico-apiserver-769444c77-774wh" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-calico--apiserver--769444c77--774wh-eth0" Jan 23 01:11:48.981130 containerd[1589]: time="2026-01-23T01:11:48.980780405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769444c77-6h5s4,Uid:d38a34ac-d16c-44a2-b363-28d164fb855d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5a3542ac4c01b29e0aa30b51900c99cbb5242830d4e8d6edc10ffca889393d0d\"" Jan 23 01:11:48.986204 containerd[1589]: time="2026-01-23T01:11:48.985773697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:11:49.020745 systemd-networkd[1481]: cali1bb6652e96f: Link UP Jan 23 01:11:49.028600 systemd-networkd[1481]: cali1bb6652e96f: Gained carrier Jan 23 01:11:49.077763 kubelet[2899]: E0123 01:11:49.077671 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f4t5v" podUID="6ae36994-0284-456d-8619-5a1f2ff25c95" Jan 23 01:11:49.105109 containerd[1589]: time="2026-01-23T01:11:49.104901741Z" level=info msg="connecting to shim 8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e" 
address="unix:///run/containerd/s/f642b1d568478d600243662d82309ecbe6edda42fff9be5c9aa47ed9b6d614ee" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.363 [INFO][4481] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.394 [INFO][4481] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p26ko.gb1.brightbox.com-k8s-whisker--5b94489fd9--glnbg-eth0 whisker-5b94489fd9- calico-system ed71740a-4cd8-4c4d-959e-402af8a98785 1005 0 2026-01-23 01:11:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b94489fd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-p26ko.gb1.brightbox.com whisker-5b94489fd9-glnbg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1bb6652e96f [] [] }} ContainerID="4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" Namespace="calico-system" Pod="whisker-5b94489fd9-glnbg" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-whisker--5b94489fd9--glnbg-" Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.396 [INFO][4481] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" Namespace="calico-system" Pod="whisker-5b94489fd9-glnbg" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-whisker--5b94489fd9--glnbg-eth0" Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.613 [INFO][4571] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" HandleID="k8s-pod-network.4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" Workload="srv--p26ko.gb1.brightbox.com-k8s-whisker--5b94489fd9--glnbg-eth0" Jan 23 01:11:49.112977 
containerd[1589]: 2026-01-23 01:11:48.616 [INFO][4571] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" HandleID="k8s-pod-network.4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" Workload="srv--p26ko.gb1.brightbox.com-k8s-whisker--5b94489fd9--glnbg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005e2af0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-p26ko.gb1.brightbox.com", "pod":"whisker-5b94489fd9-glnbg", "timestamp":"2026-01-23 01:11:48.613189526 +0000 UTC"}, Hostname:"srv-p26ko.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.616 [INFO][4571] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.810 [INFO][4571] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.812 [INFO][4571] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p26ko.gb1.brightbox.com' Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.879 [INFO][4571] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.916 [INFO][4571] ipam/ipam.go 394: Looking up existing affinities for host host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.935 [INFO][4571] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.939 [INFO][4571] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.942 [INFO][4571] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.943 [INFO][4571] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.948 [INFO][4571] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669 Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.959 [INFO][4571] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.984 [INFO][4571] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.46.135/26] block=192.168.46.128/26 handle="k8s-pod-network.4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.985 [INFO][4571] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.135/26] handle="k8s-pod-network.4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.986 [INFO][4571] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:11:49.112977 containerd[1589]: 2026-01-23 01:11:48.986 [INFO][4571] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.46.135/26] IPv6=[] ContainerID="4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" HandleID="k8s-pod-network.4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" Workload="srv--p26ko.gb1.brightbox.com-k8s-whisker--5b94489fd9--glnbg-eth0" Jan 23 01:11:49.116321 containerd[1589]: 2026-01-23 01:11:48.996 [INFO][4481] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" Namespace="calico-system" Pod="whisker-5b94489fd9-glnbg" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-whisker--5b94489fd9--glnbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p26ko.gb1.brightbox.com-k8s-whisker--5b94489fd9--glnbg-eth0", GenerateName:"whisker-5b94489fd9-", Namespace:"calico-system", SelfLink:"", UID:"ed71740a-4cd8-4c4d-959e-402af8a98785", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", 
"pod-template-hash":"5b94489fd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p26ko.gb1.brightbox.com", ContainerID:"", Pod:"whisker-5b94489fd9-glnbg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.46.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1bb6652e96f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:49.116321 containerd[1589]: 2026-01-23 01:11:48.996 [INFO][4481] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.135/32] ContainerID="4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" Namespace="calico-system" Pod="whisker-5b94489fd9-glnbg" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-whisker--5b94489fd9--glnbg-eth0" Jan 23 01:11:49.116321 containerd[1589]: 2026-01-23 01:11:48.996 [INFO][4481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1bb6652e96f ContainerID="4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" Namespace="calico-system" Pod="whisker-5b94489fd9-glnbg" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-whisker--5b94489fd9--glnbg-eth0" Jan 23 01:11:49.116321 containerd[1589]: 2026-01-23 01:11:49.030 [INFO][4481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" Namespace="calico-system" Pod="whisker-5b94489fd9-glnbg" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-whisker--5b94489fd9--glnbg-eth0" Jan 23 01:11:49.116321 containerd[1589]: 2026-01-23 
01:11:49.041 [INFO][4481] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" Namespace="calico-system" Pod="whisker-5b94489fd9-glnbg" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-whisker--5b94489fd9--glnbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p26ko.gb1.brightbox.com-k8s-whisker--5b94489fd9--glnbg-eth0", GenerateName:"whisker-5b94489fd9-", Namespace:"calico-system", SelfLink:"", UID:"ed71740a-4cd8-4c4d-959e-402af8a98785", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b94489fd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p26ko.gb1.brightbox.com", ContainerID:"4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669", Pod:"whisker-5b94489fd9-glnbg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.46.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1bb6652e96f", MAC:"32:e4:2a:97:f2:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:49.116321 containerd[1589]: 2026-01-23 01:11:49.066 [INFO][4481] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" Namespace="calico-system" Pod="whisker-5b94489fd9-glnbg" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-whisker--5b94489fd9--glnbg-eth0" Jan 23 01:11:49.188897 systemd[1]: Started cri-containerd-8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e.scope - libcontainer container 8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e. Jan 23 01:11:49.222023 systemd[1]: Started cri-containerd-375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7.scope - libcontainer container 375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7. Jan 23 01:11:49.236143 containerd[1589]: time="2026-01-23T01:11:49.235745805Z" level=info msg="StartContainer for \"128039cbf8b01e921974666750f70769172e0e2eaadb37e6ba4d261e4da4ef10\" returns successfully" Jan 23 01:11:49.243584 systemd-networkd[1481]: cali34a6a1806bc: Gained IPv6LL Jan 23 01:11:49.261864 containerd[1589]: time="2026-01-23T01:11:49.261730499Z" level=info msg="connecting to shim 4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669" address="unix:///run/containerd/s/4c28da149ed6c7b0abea7ef70b82b76f09daa9b780baeb26a667765c65decf78" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:49.335824 systemd[1]: Started cri-containerd-4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669.scope - libcontainer container 4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669. 
Jan 23 01:11:49.360506 containerd[1589]: time="2026-01-23T01:11:49.360238625Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:49.363841 containerd[1589]: time="2026-01-23T01:11:49.363198150Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:11:49.364032 containerd[1589]: time="2026-01-23T01:11:49.363636246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:11:49.364638 kubelet[2899]: E0123 01:11:49.364574 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:11:49.364990 kubelet[2899]: E0123 01:11:49.364650 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:11:49.364990 kubelet[2899]: E0123 01:11:49.364802 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-769444c77-6h5s4_calico-apiserver(d38a34ac-d16c-44a2-b363-28d164fb855d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:49.364990 kubelet[2899]: E0123 01:11:49.364867 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-769444c77-6h5s4" podUID="d38a34ac-d16c-44a2-b363-28d164fb855d" Jan 23 01:11:49.498673 systemd-networkd[1481]: cali8881978a645: Gained IPv6LL Jan 23 01:11:49.745272 containerd[1589]: time="2026-01-23T01:11:49.745072652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769444c77-774wh,Uid:da5b2d2c-13cd-4988-8a1e-436e3c779260,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8ab23b4ed7d46421b29be154abc7bf3f933aaddd646c3ebd5a3b387d71ba723e\"" Jan 23 01:11:49.754785 systemd-networkd[1481]: califbecf82bb39: Gained IPv6LL Jan 23 01:11:49.756699 containerd[1589]: time="2026-01-23T01:11:49.754991110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:11:49.788291 containerd[1589]: time="2026-01-23T01:11:49.788243486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b94489fd9-glnbg,Uid:ed71740a-4cd8-4c4d-959e-402af8a98785,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e0b4e3511303f9f6955117c2216d9aa74c336637eb70b9b4e8e44b23429d669\"" Jan 23 01:11:49.834474 containerd[1589]: time="2026-01-23T01:11:49.834059116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f7549777-v6lv7,Uid:236b218f-d8af-4e9e-b6b6-8f9ea312a2ce,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"375d8ec92ebb69c116eb239d7821911acb8308b7ddc0310b044dbffcaff55ff7\"" Jan 23 
01:11:50.078418 containerd[1589]: time="2026-01-23T01:11:50.078227652Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:50.079846 containerd[1589]: time="2026-01-23T01:11:50.079687296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:11:50.079846 containerd[1589]: time="2026-01-23T01:11:50.079805501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:11:50.080215 kubelet[2899]: E0123 01:11:50.080154 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:11:50.080848 kubelet[2899]: E0123 01:11:50.080228 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:11:50.083986 kubelet[2899]: E0123 01:11:50.081347 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-769444c77-774wh_calico-apiserver(da5b2d2c-13cd-4988-8a1e-436e3c779260): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:50.083986 kubelet[2899]: E0123 01:11:50.082943 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-769444c77-774wh" podUID="da5b2d2c-13cd-4988-8a1e-436e3c779260" Jan 23 01:11:50.085712 containerd[1589]: time="2026-01-23T01:11:50.083205870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:11:50.138367 kubelet[2899]: E0123 01:11:50.138294 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-769444c77-774wh" podUID="da5b2d2c-13cd-4988-8a1e-436e3c779260" Jan 23 01:11:50.146486 kubelet[2899]: E0123 01:11:50.146418 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-769444c77-6h5s4" podUID="d38a34ac-d16c-44a2-b363-28d164fb855d" Jan 23 01:11:50.149545 kubelet[2899]: E0123 01:11:50.149416 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f4t5v" podUID="6ae36994-0284-456d-8619-5a1f2ff25c95" Jan 23 01:11:50.332534 systemd-networkd[1481]: cali4c44c731f28: Gained IPv6LL Jan 23 01:11:50.391407 kubelet[2899]: I0123 01:11:50.380405 2899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tz2wv" podStartSLOduration=58.361846101 podStartE2EDuration="58.361846101s" podCreationTimestamp="2026-01-23 01:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:11:50.298768809 +0000 UTC m=+62.947357084" watchObservedRunningTime="2026-01-23 01:11:50.361846101 +0000 UTC m=+63.010434358" Jan 23 01:11:50.393003 containerd[1589]: time="2026-01-23T01:11:50.392937114Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:50.394289 containerd[1589]: time="2026-01-23T01:11:50.394232524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:11:50.394408 containerd[1589]: time="2026-01-23T01:11:50.394360410Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:11:50.395904 kubelet[2899]: E0123 01:11:50.394938 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:11:50.395904 kubelet[2899]: E0123 01:11:50.395020 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:11:50.395904 kubelet[2899]: E0123 01:11:50.395331 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5b94489fd9-glnbg_calico-system(ed71740a-4cd8-4c4d-959e-402af8a98785): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:50.396754 containerd[1589]: time="2026-01-23T01:11:50.396637533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:11:50.523705 systemd-networkd[1481]: cali1bb6652e96f: Gained IPv6LL Jan 23 01:11:50.679729 systemd-networkd[1481]: vxlan.calico: Link UP Jan 23 01:11:50.679741 systemd-networkd[1481]: vxlan.calico: Gained carrier Jan 23 01:11:50.713162 containerd[1589]: time="2026-01-23T01:11:50.713113310Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:50.715055 containerd[1589]: 
time="2026-01-23T01:11:50.714988469Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:11:50.715568 containerd[1589]: time="2026-01-23T01:11:50.715136602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:11:50.717419 kubelet[2899]: E0123 01:11:50.715701 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:11:50.717419 kubelet[2899]: E0123 01:11:50.715763 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:11:50.717419 kubelet[2899]: E0123 01:11:50.716048 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57f7549777-v6lv7_calico-apiserver(236b218f-d8af-4e9e-b6b6-8f9ea312a2ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:50.717419 kubelet[2899]: E0123 01:11:50.716114 2899 pod_workers.go:1324] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57f7549777-v6lv7" podUID="236b218f-d8af-4e9e-b6b6-8f9ea312a2ce" Jan 23 01:11:50.718534 containerd[1589]: time="2026-01-23T01:11:50.718024286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:11:51.027125 containerd[1589]: time="2026-01-23T01:11:51.026979208Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:51.033710 containerd[1589]: time="2026-01-23T01:11:51.033498532Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:11:51.033710 containerd[1589]: time="2026-01-23T01:11:51.033668095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:11:51.034174 kubelet[2899]: E0123 01:11:51.034112 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:11:51.034438 kubelet[2899]: E0123 01:11:51.034190 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:11:51.034438 kubelet[2899]: E0123 01:11:51.034311 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5b94489fd9-glnbg_calico-system(ed71740a-4cd8-4c4d-959e-402af8a98785): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:51.034749 kubelet[2899]: E0123 01:11:51.034554 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b94489fd9-glnbg" podUID="ed71740a-4cd8-4c4d-959e-402af8a98785" Jan 23 01:11:51.150506 kubelet[2899]: E0123 01:11:51.150135 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57f7549777-v6lv7" podUID="236b218f-d8af-4e9e-b6b6-8f9ea312a2ce" Jan 23 01:11:51.153425 kubelet[2899]: E0123 01:11:51.153306 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b94489fd9-glnbg" podUID="ed71740a-4cd8-4c4d-959e-402af8a98785" Jan 23 01:11:51.154055 kubelet[2899]: E0123 01:11:51.153782 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-769444c77-774wh" podUID="da5b2d2c-13cd-4988-8a1e-436e3c779260" Jan 23 01:11:52.058821 
systemd-networkd[1481]: vxlan.calico: Gained IPv6LL Jan 23 01:11:57.623927 containerd[1589]: time="2026-01-23T01:11:57.623834189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rpxb6,Uid:36b120e1-773f-44d0-abdd-d8ef5044f795,Namespace:kube-system,Attempt:0,}" Jan 23 01:11:57.806421 systemd-networkd[1481]: calife41df14be6: Link UP Jan 23 01:11:57.807349 systemd-networkd[1481]: calife41df14be6: Gained carrier Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.691 [INFO][4928] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--rpxb6-eth0 coredns-66bc5c9577- kube-system 36b120e1-773f-44d0-abdd-d8ef5044f795 904 0 2026-01-23 01:10:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-p26ko.gb1.brightbox.com coredns-66bc5c9577-rpxb6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calife41df14be6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" Namespace="kube-system" Pod="coredns-66bc5c9577-rpxb6" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--rpxb6-" Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.691 [INFO][4928] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" Namespace="kube-system" Pod="coredns-66bc5c9577-rpxb6" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--rpxb6-eth0" Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.738 [INFO][4940] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" HandleID="k8s-pod-network.e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" Workload="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--rpxb6-eth0" Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.738 [INFO][4940] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" HandleID="k8s-pod-network.e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" Workload="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--rpxb6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-p26ko.gb1.brightbox.com", "pod":"coredns-66bc5c9577-rpxb6", "timestamp":"2026-01-23 01:11:57.738095999 +0000 UTC"}, Hostname:"srv-p26ko.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.738 [INFO][4940] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.738 [INFO][4940] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.739 [INFO][4940] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p26ko.gb1.brightbox.com' Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.752 [INFO][4940] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.759 [INFO][4940] ipam/ipam.go 394: Looking up existing affinities for host host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.766 [INFO][4940] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.769 [INFO][4940] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.772 [INFO][4940] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.772 [INFO][4940] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.774 [INFO][4940] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8 Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.780 [INFO][4940] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.791 [INFO][4940] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.46.136/26] block=192.168.46.128/26 handle="k8s-pod-network.e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.791 [INFO][4940] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.136/26] handle="k8s-pod-network.e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.791 [INFO][4940] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:11:57.842723 containerd[1589]: 2026-01-23 01:11:57.791 [INFO][4940] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.46.136/26] IPv6=[] ContainerID="e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" HandleID="k8s-pod-network.e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" Workload="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--rpxb6-eth0" Jan 23 01:11:57.844999 containerd[1589]: 2026-01-23 01:11:57.797 [INFO][4928] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" Namespace="kube-system" Pod="coredns-66bc5c9577-rpxb6" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--rpxb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--rpxb6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"36b120e1-773f-44d0-abdd-d8ef5044f795", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p26ko.gb1.brightbox.com", ContainerID:"", Pod:"coredns-66bc5c9577-rpxb6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife41df14be6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:57.844999 containerd[1589]: 2026-01-23 01:11:57.798 [INFO][4928] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.136/32] ContainerID="e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" Namespace="kube-system" Pod="coredns-66bc5c9577-rpxb6" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--rpxb6-eth0" Jan 23 01:11:57.844999 containerd[1589]: 2026-01-23 01:11:57.798 [INFO][4928] cni-plugin/dataplane_linux.go 69: Setting 
the host side veth name to calife41df14be6 ContainerID="e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" Namespace="kube-system" Pod="coredns-66bc5c9577-rpxb6" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--rpxb6-eth0" Jan 23 01:11:57.844999 containerd[1589]: 2026-01-23 01:11:57.811 [INFO][4928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" Namespace="kube-system" Pod="coredns-66bc5c9577-rpxb6" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--rpxb6-eth0" Jan 23 01:11:57.844999 containerd[1589]: 2026-01-23 01:11:57.811 [INFO][4928] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" Namespace="kube-system" Pod="coredns-66bc5c9577-rpxb6" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--rpxb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--rpxb6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"36b120e1-773f-44d0-abdd-d8ef5044f795", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p26ko.gb1.brightbox.com", 
ContainerID:"e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8", Pod:"coredns-66bc5c9577-rpxb6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife41df14be6", MAC:"5e:b4:90:d8:d8:84", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:57.845351 containerd[1589]: 2026-01-23 01:11:57.827 [INFO][4928] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" Namespace="kube-system" Pod="coredns-66bc5c9577-rpxb6" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-coredns--66bc5c9577--rpxb6-eth0" Jan 23 01:11:57.890357 containerd[1589]: time="2026-01-23T01:11:57.889742553Z" level=info msg="connecting to shim e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8" address="unix:///run/containerd/s/e9333a452685c38b47b6ec23d205f58463aba71f2c3a52fb2351f3f4c7fe1044" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:57.959673 systemd[1]: Started 
cri-containerd-e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8.scope - libcontainer container e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8. Jan 23 01:11:58.033690 containerd[1589]: time="2026-01-23T01:11:58.033621010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rpxb6,Uid:36b120e1-773f-44d0-abdd-d8ef5044f795,Namespace:kube-system,Attempt:0,} returns sandbox id \"e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8\"" Jan 23 01:11:58.047435 containerd[1589]: time="2026-01-23T01:11:58.047267145Z" level=info msg="CreateContainer within sandbox \"e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:11:58.056190 containerd[1589]: time="2026-01-23T01:11:58.056146592Z" level=info msg="Container aac5b8af87dc9dab4870e8c2192b48536c2e6553af3e0407c00d87564ac3ba07: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:11:58.067049 containerd[1589]: time="2026-01-23T01:11:58.066891392Z" level=info msg="CreateContainer within sandbox \"e08e7ac3c7b24e0606bc8cbed557eace181debb93a0f680fe49f68fe7be8e9b8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aac5b8af87dc9dab4870e8c2192b48536c2e6553af3e0407c00d87564ac3ba07\"" Jan 23 01:11:58.068610 containerd[1589]: time="2026-01-23T01:11:58.068543028Z" level=info msg="StartContainer for \"aac5b8af87dc9dab4870e8c2192b48536c2e6553af3e0407c00d87564ac3ba07\"" Jan 23 01:11:58.069787 containerd[1589]: time="2026-01-23T01:11:58.069750217Z" level=info msg="connecting to shim aac5b8af87dc9dab4870e8c2192b48536c2e6553af3e0407c00d87564ac3ba07" address="unix:///run/containerd/s/e9333a452685c38b47b6ec23d205f58463aba71f2c3a52fb2351f3f4c7fe1044" protocol=ttrpc version=3 Jan 23 01:11:58.097782 systemd[1]: Started cri-containerd-aac5b8af87dc9dab4870e8c2192b48536c2e6553af3e0407c00d87564ac3ba07.scope - libcontainer container 
aac5b8af87dc9dab4870e8c2192b48536c2e6553af3e0407c00d87564ac3ba07. Jan 23 01:11:58.145955 containerd[1589]: time="2026-01-23T01:11:58.145456802Z" level=info msg="StartContainer for \"aac5b8af87dc9dab4870e8c2192b48536c2e6553af3e0407c00d87564ac3ba07\" returns successfully" Jan 23 01:11:58.212798 kubelet[2899]: I0123 01:11:58.212705 2899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rpxb6" podStartSLOduration=66.212675132 podStartE2EDuration="1m6.212675132s" podCreationTimestamp="2026-01-23 01:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:11:58.211832784 +0000 UTC m=+70.860421067" watchObservedRunningTime="2026-01-23 01:11:58.212675132 +0000 UTC m=+70.861263392" Jan 23 01:11:58.622785 containerd[1589]: time="2026-01-23T01:11:58.622713593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2q95q,Uid:ac789593-88de-4afb-9cdb-f9323fe8cb8a,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:58.790281 systemd-networkd[1481]: calibd419a2b650: Link UP Jan 23 01:11:58.791831 systemd-networkd[1481]: calibd419a2b650: Gained carrier Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.688 [INFO][5040] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p26ko.gb1.brightbox.com-k8s-csi--node--driver--2q95q-eth0 csi-node-driver- calico-system ac789593-88de-4afb-9cdb-f9323fe8cb8a 779 0 2026-01-23 01:11:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-p26ko.gb1.brightbox.com csi-node-driver-2q95q eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] calibd419a2b650 [] [] }} ContainerID="58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" Namespace="calico-system" Pod="csi-node-driver-2q95q" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-csi--node--driver--2q95q-" Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.689 [INFO][5040] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" Namespace="calico-system" Pod="csi-node-driver-2q95q" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-csi--node--driver--2q95q-eth0" Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.732 [INFO][5052] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" HandleID="k8s-pod-network.58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" Workload="srv--p26ko.gb1.brightbox.com-k8s-csi--node--driver--2q95q-eth0" Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.732 [INFO][5052] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" HandleID="k8s-pod-network.58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" Workload="srv--p26ko.gb1.brightbox.com-k8s-csi--node--driver--2q95q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5770), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-p26ko.gb1.brightbox.com", "pod":"csi-node-driver-2q95q", "timestamp":"2026-01-23 01:11:58.732795862 +0000 UTC"}, Hostname:"srv-p26ko.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.733 [INFO][5052] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.733 [INFO][5052] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.733 [INFO][5052] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p26ko.gb1.brightbox.com' Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.744 [INFO][5052] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.753 [INFO][5052] ipam/ipam.go 394: Looking up existing affinities for host host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.760 [INFO][5052] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.762 [INFO][5052] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.766 [INFO][5052] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.766 [INFO][5052] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.769 [INFO][5052] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9 Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.774 [INFO][5052] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.46.128/26 
handle="k8s-pod-network.58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.781 [INFO][5052] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.46.137/26] block=192.168.46.128/26 handle="k8s-pod-network.58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.782 [INFO][5052] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.137/26] handle="k8s-pod-network.58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" host="srv-p26ko.gb1.brightbox.com" Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.782 [INFO][5052] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:11:58.816510 containerd[1589]: 2026-01-23 01:11:58.782 [INFO][5052] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.46.137/26] IPv6=[] ContainerID="58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" HandleID="k8s-pod-network.58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" Workload="srv--p26ko.gb1.brightbox.com-k8s-csi--node--driver--2q95q-eth0" Jan 23 01:11:58.821125 containerd[1589]: 2026-01-23 01:11:58.785 [INFO][5040] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" Namespace="calico-system" Pod="csi-node-driver-2q95q" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-csi--node--driver--2q95q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p26ko.gb1.brightbox.com-k8s-csi--node--driver--2q95q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ac789593-88de-4afb-9cdb-f9323fe8cb8a", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 19, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p26ko.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-2q95q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.46.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd419a2b650", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:58.821125 containerd[1589]: 2026-01-23 01:11:58.785 [INFO][5040] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.137/32] ContainerID="58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" Namespace="calico-system" Pod="csi-node-driver-2q95q" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-csi--node--driver--2q95q-eth0" Jan 23 01:11:58.821125 containerd[1589]: 2026-01-23 01:11:58.785 [INFO][5040] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd419a2b650 ContainerID="58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" Namespace="calico-system" Pod="csi-node-driver-2q95q" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-csi--node--driver--2q95q-eth0" Jan 23 01:11:58.821125 containerd[1589]: 2026-01-23 01:11:58.793 [INFO][5040] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" Namespace="calico-system" Pod="csi-node-driver-2q95q" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-csi--node--driver--2q95q-eth0" Jan 23 01:11:58.821125 containerd[1589]: 2026-01-23 01:11:58.794 [INFO][5040] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" Namespace="calico-system" Pod="csi-node-driver-2q95q" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-csi--node--driver--2q95q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p26ko.gb1.brightbox.com-k8s-csi--node--driver--2q95q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ac789593-88de-4afb-9cdb-f9323fe8cb8a", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p26ko.gb1.brightbox.com", ContainerID:"58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9", Pod:"csi-node-driver-2q95q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.46.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd419a2b650", MAC:"ce:72:57:fc:30:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:58.821125 containerd[1589]: 2026-01-23 01:11:58.812 [INFO][5040] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" Namespace="calico-system" Pod="csi-node-driver-2q95q" WorkloadEndpoint="srv--p26ko.gb1.brightbox.com-k8s-csi--node--driver--2q95q-eth0" Jan 23 01:11:58.843213 containerd[1589]: time="2026-01-23T01:11:58.843161163Z" level=info msg="connecting to shim 58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9" address="unix:///run/containerd/s/d5df75a0fe3e7a64c56282892614c54c9066466c9271e0964ea98e0644d8e59d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:58.895781 systemd[1]: Started cri-containerd-58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9.scope - libcontainer container 58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9. 
Jan 23 01:11:58.945061 containerd[1589]: time="2026-01-23T01:11:58.945017056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2q95q,Uid:ac789593-88de-4afb-9cdb-f9323fe8cb8a,Namespace:calico-system,Attempt:0,} returns sandbox id \"58c969cc00e2a3b0246031efffb644822e9c0e372933d82c5bee86af393c6eb9\"" Jan 23 01:11:58.947273 containerd[1589]: time="2026-01-23T01:11:58.947224743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:11:59.252672 containerd[1589]: time="2026-01-23T01:11:59.252245120Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:59.253948 containerd[1589]: time="2026-01-23T01:11:59.253887187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:11:59.254402 containerd[1589]: time="2026-01-23T01:11:59.254017533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:11:59.254490 kubelet[2899]: E0123 01:11:59.254196 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:11:59.254490 kubelet[2899]: E0123 01:11:59.254255 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:11:59.254490 kubelet[2899]: E0123 
01:11:59.254358 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-2q95q_calico-system(ac789593-88de-4afb-9cdb-f9323fe8cb8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:59.256993 containerd[1589]: time="2026-01-23T01:11:59.256867261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:11:59.354638 systemd-networkd[1481]: calife41df14be6: Gained IPv6LL Jan 23 01:11:59.570623 containerd[1589]: time="2026-01-23T01:11:59.570551182Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:59.571895 containerd[1589]: time="2026-01-23T01:11:59.571821058Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:11:59.572006 containerd[1589]: time="2026-01-23T01:11:59.571968952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:11:59.572301 kubelet[2899]: E0123 01:11:59.572237 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:11:59.572413 kubelet[2899]: E0123 
01:11:59.572346 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:11:59.573339 kubelet[2899]: E0123 01:11:59.572996 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-2q95q_calico-system(ac789593-88de-4afb-9cdb-f9323fe8cb8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:59.573339 kubelet[2899]: E0123 01:11:59.573067 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2q95q" podUID="ac789593-88de-4afb-9cdb-f9323fe8cb8a" Jan 23 01:12:00.196467 kubelet[2899]: E0123 01:12:00.196313 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2q95q" podUID="ac789593-88de-4afb-9cdb-f9323fe8cb8a" Jan 23 01:12:00.250667 systemd-networkd[1481]: calibd419a2b650: Gained IPv6LL Jan 23 01:12:00.621605 containerd[1589]: time="2026-01-23T01:12:00.621537253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:12:00.961233 containerd[1589]: time="2026-01-23T01:12:00.961035105Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:00.962661 containerd[1589]: time="2026-01-23T01:12:00.962610695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:12:00.962760 containerd[1589]: time="2026-01-23T01:12:00.962725387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:12:00.963235 kubelet[2899]: E0123 
01:12:00.962950 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:12:00.963235 kubelet[2899]: E0123 01:12:00.963022 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:12:00.963235 kubelet[2899]: E0123 01:12:00.963137 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-55994859c6-2x5qp_calico-system(bc145d36-eea8-4680-ac11-0b79793cc035): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:00.963235 kubelet[2899]: E0123 01:12:00.963187 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55994859c6-2x5qp" podUID="bc145d36-eea8-4680-ac11-0b79793cc035" Jan 23 01:12:04.622409 
containerd[1589]: time="2026-01-23T01:12:04.621937900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:12:04.958422 containerd[1589]: time="2026-01-23T01:12:04.958161209Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:04.960474 containerd[1589]: time="2026-01-23T01:12:04.960358266Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:12:04.960474 containerd[1589]: time="2026-01-23T01:12:04.960431634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:12:04.960941 kubelet[2899]: E0123 01:12:04.960876 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:12:04.961432 kubelet[2899]: E0123 01:12:04.960956 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:12:04.961432 kubelet[2899]: E0123 01:12:04.961226 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-f4t5v_calico-system(6ae36994-0284-456d-8619-5a1f2ff25c95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:04.962729 containerd[1589]: time="2026-01-23T01:12:04.961624015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:12:04.969702 kubelet[2899]: E0123 01:12:04.961287 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f4t5v" podUID="6ae36994-0284-456d-8619-5a1f2ff25c95" Jan 23 01:12:05.274260 containerd[1589]: time="2026-01-23T01:12:05.274182943Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:05.275536 containerd[1589]: time="2026-01-23T01:12:05.275472713Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:12:05.275640 containerd[1589]: time="2026-01-23T01:12:05.275576928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:12:05.275896 kubelet[2899]: E0123 01:12:05.275832 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:12:05.276797 kubelet[2899]: E0123 01:12:05.275903 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:12:05.276797 kubelet[2899]: E0123 01:12:05.276033 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57f7549777-v6lv7_calico-apiserver(236b218f-d8af-4e9e-b6b6-8f9ea312a2ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:05.276797 kubelet[2899]: E0123 01:12:05.276089 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57f7549777-v6lv7" podUID="236b218f-d8af-4e9e-b6b6-8f9ea312a2ce" Jan 23 01:12:05.623736 containerd[1589]: time="2026-01-23T01:12:05.623162150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:12:05.939590 containerd[1589]: time="2026-01-23T01:12:05.939232739Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:05.940813 containerd[1589]: time="2026-01-23T01:12:05.940658495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:12:05.940813 containerd[1589]: time="2026-01-23T01:12:05.940765552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:12:05.941265 kubelet[2899]: E0123 01:12:05.941070 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:12:05.941265 kubelet[2899]: E0123 01:12:05.941154 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:12:05.941376 kubelet[2899]: E0123 01:12:05.941322 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-769444c77-6h5s4_calico-apiserver(d38a34ac-d16c-44a2-b363-28d164fb855d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:05.941464 kubelet[2899]: E0123 01:12:05.941417 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-769444c77-6h5s4" podUID="d38a34ac-d16c-44a2-b363-28d164fb855d" Jan 23 01:12:06.622267 containerd[1589]: time="2026-01-23T01:12:06.621858979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:12:06.937940 containerd[1589]: time="2026-01-23T01:12:06.937743362Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:06.939870 containerd[1589]: time="2026-01-23T01:12:06.939731499Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:12:06.939870 containerd[1589]: time="2026-01-23T01:12:06.939772784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:12:06.940406 kubelet[2899]: E0123 01:12:06.940311 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:12:06.941243 kubelet[2899]: E0123 01:12:06.940572 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:12:06.942796 kubelet[2899]: E0123 01:12:06.941564 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-769444c77-774wh_calico-apiserver(da5b2d2c-13cd-4988-8a1e-436e3c779260): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:06.942796 kubelet[2899]: E0123 01:12:06.941624 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-769444c77-774wh" podUID="da5b2d2c-13cd-4988-8a1e-436e3c779260" Jan 23 01:12:06.943405 containerd[1589]: time="2026-01-23T01:12:06.943188165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:12:07.284690 containerd[1589]: time="2026-01-23T01:12:07.284629953Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:07.285876 containerd[1589]: time="2026-01-23T01:12:07.285798476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:12:07.286429 containerd[1589]: time="2026-01-23T01:12:07.285900445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, 
bytes read=73" Jan 23 01:12:07.286502 kubelet[2899]: E0123 01:12:07.286185 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:12:07.286502 kubelet[2899]: E0123 01:12:07.286250 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:12:07.287288 kubelet[2899]: E0123 01:12:07.286362 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5b94489fd9-glnbg_calico-system(ed71740a-4cd8-4c4d-959e-402af8a98785): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:07.289179 containerd[1589]: time="2026-01-23T01:12:07.289080961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:12:07.608266 containerd[1589]: time="2026-01-23T01:12:07.607964805Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:07.609896 containerd[1589]: time="2026-01-23T01:12:07.609821510Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:12:07.610124 containerd[1589]: time="2026-01-23T01:12:07.609910307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:12:07.610546 kubelet[2899]: E0123 01:12:07.610479 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:12:07.610627 kubelet[2899]: E0123 01:12:07.610561 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:12:07.610736 kubelet[2899]: E0123 01:12:07.610667 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5b94489fd9-glnbg_calico-system(ed71740a-4cd8-4c4d-959e-402af8a98785): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:07.611017 kubelet[2899]: E0123 01:12:07.610767 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b94489fd9-glnbg" podUID="ed71740a-4cd8-4c4d-959e-402af8a98785" Jan 23 01:12:11.626152 containerd[1589]: time="2026-01-23T01:12:11.626067963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:12:11.935013 containerd[1589]: time="2026-01-23T01:12:11.934789382Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:11.936402 containerd[1589]: time="2026-01-23T01:12:11.936315300Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:12:11.936529 containerd[1589]: time="2026-01-23T01:12:11.936507080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:12:11.937426 kubelet[2899]: E0123 01:12:11.936822 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:12:11.937426 kubelet[2899]: E0123 01:12:11.936892 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:12:11.937426 kubelet[2899]: E0123 01:12:11.937014 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-2q95q_calico-system(ac789593-88de-4afb-9cdb-f9323fe8cb8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:11.939641 containerd[1589]: time="2026-01-23T01:12:11.939591855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:12:12.251177 containerd[1589]: time="2026-01-23T01:12:12.251010356Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:12.252548 containerd[1589]: time="2026-01-23T01:12:12.252482814Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:12:12.252735 containerd[1589]: time="2026-01-23T01:12:12.252564585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:12:12.252815 kubelet[2899]: E0123 01:12:12.252757 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:12:12.252921 kubelet[2899]: E0123 01:12:12.252825 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:12:12.252972 kubelet[2899]: E0123 01:12:12.252942 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-2q95q_calico-system(ac789593-88de-4afb-9cdb-f9323fe8cb8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:12.253093 kubelet[2899]: E0123 01:12:12.253001 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2q95q" 
podUID="ac789593-88de-4afb-9cdb-f9323fe8cb8a" Jan 23 01:12:15.621918 kubelet[2899]: E0123 01:12:15.621741 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f4t5v" podUID="6ae36994-0284-456d-8619-5a1f2ff25c95" Jan 23 01:12:16.621701 kubelet[2899]: E0123 01:12:16.621534 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57f7549777-v6lv7" podUID="236b218f-d8af-4e9e-b6b6-8f9ea312a2ce" Jan 23 01:12:16.623336 kubelet[2899]: E0123 01:12:16.622200 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55994859c6-2x5qp" podUID="bc145d36-eea8-4680-ac11-0b79793cc035" Jan 23 
01:12:18.622073 kubelet[2899]: E0123 01:12:18.621919 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b94489fd9-glnbg" podUID="ed71740a-4cd8-4c4d-959e-402af8a98785" Jan 23 01:12:19.627497 kubelet[2899]: E0123 01:12:19.627411 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-769444c77-6h5s4" podUID="d38a34ac-d16c-44a2-b363-28d164fb855d" Jan 23 01:12:20.620837 kubelet[2899]: E0123 01:12:20.620715 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-769444c77-774wh" podUID="da5b2d2c-13cd-4988-8a1e-436e3c779260" Jan 23 01:12:23.622490 kubelet[2899]: E0123 01:12:23.621955 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2q95q" podUID="ac789593-88de-4afb-9cdb-f9323fe8cb8a" Jan 23 01:12:23.701995 systemd[1]: Started sshd@9-10.230.15.178:22-20.161.92.111:33212.service - OpenSSH per-connection server daemon (20.161.92.111:33212). Jan 23 01:12:24.360007 sshd[5177]: Accepted publickey for core from 20.161.92.111 port 33212 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:12:24.362656 sshd-session[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:24.374082 systemd-logind[1569]: New session 12 of user core. Jan 23 01:12:24.382482 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 23 01:12:25.585344 sshd[5180]: Connection closed by 20.161.92.111 port 33212
Jan 23 01:12:25.585893 sshd-session[5177]: pam_unix(sshd:session): session closed for user core
Jan 23 01:12:25.601421 systemd[1]: sshd@9-10.230.15.178:22-20.161.92.111:33212.service: Deactivated successfully.
Jan 23 01:12:25.608064 systemd[1]: session-12.scope: Deactivated successfully.
Jan 23 01:12:25.613268 systemd-logind[1569]: Session 12 logged out. Waiting for processes to exit.
Jan 23 01:12:25.616123 systemd-logind[1569]: Removed session 12.
Jan 23 01:12:28.630234 containerd[1589]: time="2026-01-23T01:12:28.630176182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 23 01:12:28.971364 containerd[1589]: time="2026-01-23T01:12:28.970994229Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:12:28.972759 containerd[1589]: time="2026-01-23T01:12:28.972594538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 01:12:28.972759 containerd[1589]: time="2026-01-23T01:12:28.972713259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 23 01:12:28.972989 kubelet[2899]: E0123 01:12:28.972927 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 01:12:28.975894 kubelet[2899]: E0123 01:12:28.973014 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 01:12:28.975894 kubelet[2899]: E0123 01:12:28.973149 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-f4t5v_calico-system(6ae36994-0284-456d-8619-5a1f2ff25c95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:12:28.975894 kubelet[2899]: E0123 01:12:28.973198 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f4t5v" podUID="6ae36994-0284-456d-8619-5a1f2ff25c95"
Jan 23 01:12:29.623758 containerd[1589]: time="2026-01-23T01:12:29.623695166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 01:12:29.944709 containerd[1589]: time="2026-01-23T01:12:29.944443100Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:12:29.946688 containerd[1589]: time="2026-01-23T01:12:29.946559316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 01:12:29.946688 containerd[1589]: time="2026-01-23T01:12:29.946608831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 01:12:29.946997 kubelet[2899]: E0123 01:12:29.946914 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:12:29.947083 kubelet[2899]: E0123 01:12:29.947011 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:12:29.947211 kubelet[2899]: E0123 01:12:29.947166 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57f7549777-v6lv7_calico-apiserver(236b218f-d8af-4e9e-b6b6-8f9ea312a2ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:12:29.947278 kubelet[2899]: E0123 01:12:29.947235 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57f7549777-v6lv7" podUID="236b218f-d8af-4e9e-b6b6-8f9ea312a2ce"
Jan 23 01:12:30.694845 systemd[1]: Started sshd@10-10.230.15.178:22-20.161.92.111:33226.service - OpenSSH per-connection server daemon (20.161.92.111:33226).
Jan 23 01:12:31.325013 sshd[5194]: Accepted publickey for core from 20.161.92.111 port 33226 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:12:31.327875 sshd-session[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:12:31.339757 systemd-logind[1569]: New session 13 of user core.
Jan 23 01:12:31.345632 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 23 01:12:31.624630 containerd[1589]: time="2026-01-23T01:12:31.624488412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 23 01:12:31.920905 sshd[5197]: Connection closed by 20.161.92.111 port 33226
Jan 23 01:12:31.920700 sshd-session[5194]: pam_unix(sshd:session): session closed for user core
Jan 23 01:12:31.929586 systemd[1]: sshd@10-10.230.15.178:22-20.161.92.111:33226.service: Deactivated successfully.
Jan 23 01:12:31.933717 systemd[1]: session-13.scope: Deactivated successfully.
Jan 23 01:12:31.936587 systemd-logind[1569]: Session 13 logged out. Waiting for processes to exit.
Jan 23 01:12:31.940187 systemd-logind[1569]: Removed session 13.
Jan 23 01:12:31.941564 containerd[1589]: time="2026-01-23T01:12:31.941520055Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:12:31.943487 containerd[1589]: time="2026-01-23T01:12:31.942728217Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 01:12:31.943487 containerd[1589]: time="2026-01-23T01:12:31.942852152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 23 01:12:31.943659 kubelet[2899]: E0123 01:12:31.943037 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 01:12:31.943659 kubelet[2899]: E0123 01:12:31.943133 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 01:12:31.943659 kubelet[2899]: E0123 01:12:31.943375 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-55994859c6-2x5qp_calico-system(bc145d36-eea8-4680-ac11-0b79793cc035): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:12:31.943659 kubelet[2899]: E0123 01:12:31.943551 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55994859c6-2x5qp" podUID="bc145d36-eea8-4680-ac11-0b79793cc035"
Jan 23 01:12:31.944917 containerd[1589]: time="2026-01-23T01:12:31.944643339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 23 01:12:32.300761 containerd[1589]: time="2026-01-23T01:12:32.300640192Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:12:32.301892 containerd[1589]: time="2026-01-23T01:12:32.301843459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 23 01:12:32.301992 containerd[1589]: time="2026-01-23T01:12:32.301961682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 23 01:12:32.302305 kubelet[2899]: E0123 01:12:32.302245 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 01:12:32.302417 kubelet[2899]: E0123 01:12:32.302321 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 01:12:32.302552 kubelet[2899]: E0123 01:12:32.302452 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5b94489fd9-glnbg_calico-system(ed71740a-4cd8-4c4d-959e-402af8a98785): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:12:32.304640 containerd[1589]: time="2026-01-23T01:12:32.304606893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 23 01:12:32.630929 containerd[1589]: time="2026-01-23T01:12:32.630733698Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:12:32.632538 containerd[1589]: time="2026-01-23T01:12:32.632003922Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 23 01:12:32.632538 containerd[1589]: time="2026-01-23T01:12:32.632025431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 23 01:12:32.632648 kubelet[2899]: E0123 01:12:32.632342 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 01:12:32.632648 kubelet[2899]: E0123 01:12:32.632429 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 01:12:32.632648 kubelet[2899]: E0123 01:12:32.632551 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5b94489fd9-glnbg_calico-system(ed71740a-4cd8-4c4d-959e-402af8a98785): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:12:32.632811 kubelet[2899]: E0123 01:12:32.632609 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b94489fd9-glnbg" podUID="ed71740a-4cd8-4c4d-959e-402af8a98785"
Jan 23 01:12:32.634065 containerd[1589]: time="2026-01-23T01:12:32.633524877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 01:12:32.978900 containerd[1589]: time="2026-01-23T01:12:32.978681716Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:12:32.981705 containerd[1589]: time="2026-01-23T01:12:32.981592672Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 01:12:32.981705 containerd[1589]: time="2026-01-23T01:12:32.981643210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 01:12:32.982102 kubelet[2899]: E0123 01:12:32.982036 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:12:32.983855 kubelet[2899]: E0123 01:12:32.983447 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:12:32.984403 kubelet[2899]: E0123 01:12:32.983643 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-769444c77-6h5s4_calico-apiserver(d38a34ac-d16c-44a2-b363-28d164fb855d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:12:32.984403 kubelet[2899]: E0123 01:12:32.984113 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-769444c77-6h5s4" podUID="d38a34ac-d16c-44a2-b363-28d164fb855d"
Jan 23 01:12:34.624025 containerd[1589]: time="2026-01-23T01:12:34.622664704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 01:12:34.961776 containerd[1589]: time="2026-01-23T01:12:34.961414591Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:12:34.962824 containerd[1589]: time="2026-01-23T01:12:34.962695126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 01:12:34.962824 containerd[1589]: time="2026-01-23T01:12:34.962756610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 01:12:34.963068 kubelet[2899]: E0123 01:12:34.962982 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:12:34.963546 kubelet[2899]: E0123 01:12:34.963064 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:12:34.963546 kubelet[2899]: E0123 01:12:34.963247 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-769444c77-774wh_calico-apiserver(da5b2d2c-13cd-4988-8a1e-436e3c779260): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:12:34.963672 kubelet[2899]: E0123 01:12:34.963534 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-769444c77-774wh" podUID="da5b2d2c-13cd-4988-8a1e-436e3c779260"
Jan 23 01:12:37.025824 systemd[1]: Started sshd@11-10.230.15.178:22-20.161.92.111:44624.service - OpenSSH per-connection server daemon (20.161.92.111:44624).
Jan 23 01:12:37.632518 containerd[1589]: time="2026-01-23T01:12:37.632452334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 01:12:37.637715 sshd[5217]: Accepted publickey for core from 20.161.92.111 port 44624 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:12:37.642314 sshd-session[5217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:12:37.656749 systemd-logind[1569]: New session 14 of user core.
Jan 23 01:12:37.663130 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 23 01:12:37.939330 containerd[1589]: time="2026-01-23T01:12:37.939149220Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:12:37.940747 containerd[1589]: time="2026-01-23T01:12:37.940691725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 01:12:37.941001 containerd[1589]: time="2026-01-23T01:12:37.940875840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 01:12:37.941306 kubelet[2899]: E0123 01:12:37.941227 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 01:12:37.941814 kubelet[2899]: E0123 01:12:37.941309 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 01:12:37.941814 kubelet[2899]: E0123 01:12:37.941447 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-2q95q_calico-system(ac789593-88de-4afb-9cdb-f9323fe8cb8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:12:37.945738 containerd[1589]: time="2026-01-23T01:12:37.945691049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 01:12:38.161172 sshd[5220]: Connection closed by 20.161.92.111 port 44624
Jan 23 01:12:38.162084 sshd-session[5217]: pam_unix(sshd:session): session closed for user core
Jan 23 01:12:38.170621 systemd[1]: sshd@11-10.230.15.178:22-20.161.92.111:44624.service: Deactivated successfully.
Jan 23 01:12:38.173948 systemd[1]: session-14.scope: Deactivated successfully.
Jan 23 01:12:38.176886 systemd-logind[1569]: Session 14 logged out. Waiting for processes to exit.
Jan 23 01:12:38.179937 systemd-logind[1569]: Removed session 14.
Jan 23 01:12:38.262778 containerd[1589]: time="2026-01-23T01:12:38.262617216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:12:38.266738 systemd[1]: Started sshd@12-10.230.15.178:22-20.161.92.111:44638.service - OpenSSH per-connection server daemon (20.161.92.111:44638).
Jan 23 01:12:38.272468 containerd[1589]: time="2026-01-23T01:12:38.270808630Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 01:12:38.272468 containerd[1589]: time="2026-01-23T01:12:38.270862399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 01:12:38.273438 kubelet[2899]: E0123 01:12:38.272800 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:12:38.273438 kubelet[2899]: E0123 01:12:38.272889 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:12:38.273438 kubelet[2899]: E0123 01:12:38.273011 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-2q95q_calico-system(ac789593-88de-4afb-9cdb-f9323fe8cb8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:12:38.274593 kubelet[2899]: E0123 01:12:38.273149 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2q95q" podUID="ac789593-88de-4afb-9cdb-f9323fe8cb8a"
Jan 23 01:12:38.863909 sshd[5233]: Accepted publickey for core from 20.161.92.111 port 44638 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:12:38.866450 sshd-session[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:12:38.876643 systemd-logind[1569]: New session 15 of user core.
Jan 23 01:12:38.884715 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 23 01:12:39.526582 sshd[5236]: Connection closed by 20.161.92.111 port 44638
Jan 23 01:12:39.530182 sshd-session[5233]: pam_unix(sshd:session): session closed for user core
Jan 23 01:12:39.536130 systemd[1]: sshd@12-10.230.15.178:22-20.161.92.111:44638.service: Deactivated successfully.
Jan 23 01:12:39.539163 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 01:12:39.540864 systemd-logind[1569]: Session 15 logged out. Waiting for processes to exit.
Jan 23 01:12:39.544095 systemd-logind[1569]: Removed session 15.
Jan 23 01:12:39.645677 systemd[1]: Started sshd@13-10.230.15.178:22-20.161.92.111:44640.service - OpenSSH per-connection server daemon (20.161.92.111:44640).
Jan 23 01:12:39.648406 kubelet[2899]: E0123 01:12:39.646643 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f4t5v" podUID="6ae36994-0284-456d-8619-5a1f2ff25c95"
Jan 23 01:12:40.383918 sshd[5248]: Accepted publickey for core from 20.161.92.111 port 44640 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:12:40.385190 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:12:40.394819 systemd-logind[1569]: New session 16 of user core.
Jan 23 01:12:40.401941 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 23 01:12:41.051036 sshd[5252]: Connection closed by 20.161.92.111 port 44640
Jan 23 01:12:41.049862 sshd-session[5248]: pam_unix(sshd:session): session closed for user core
Jan 23 01:12:41.056634 systemd[1]: sshd@13-10.230.15.178:22-20.161.92.111:44640.service: Deactivated successfully.
Jan 23 01:12:41.060946 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 01:12:41.067328 systemd-logind[1569]: Session 16 logged out. Waiting for processes to exit.
Jan 23 01:12:41.070014 systemd-logind[1569]: Removed session 16.
Jan 23 01:12:42.621728 kubelet[2899]: E0123 01:12:42.621645 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57f7549777-v6lv7" podUID="236b218f-d8af-4e9e-b6b6-8f9ea312a2ce"
Jan 23 01:12:43.634773 kubelet[2899]: E0123 01:12:43.634715 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b94489fd9-glnbg" podUID="ed71740a-4cd8-4c4d-959e-402af8a98785"
Jan 23 01:12:46.131740 systemd[1]: Started sshd@14-10.230.15.178:22-20.161.92.111:43594.service - OpenSSH per-connection server daemon (20.161.92.111:43594).
Jan 23 01:12:46.731496 sshd[5264]: Accepted publickey for core from 20.161.92.111 port 43594 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:12:46.734042 sshd-session[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:12:46.745470 systemd-logind[1569]: New session 17 of user core.
Jan 23 01:12:46.751367 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 23 01:12:47.345505 sshd[5267]: Connection closed by 20.161.92.111 port 43594
Jan 23 01:12:47.347343 sshd-session[5264]: pam_unix(sshd:session): session closed for user core
Jan 23 01:12:47.354805 systemd[1]: sshd@14-10.230.15.178:22-20.161.92.111:43594.service: Deactivated successfully.
Jan 23 01:12:47.360876 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 01:12:47.364927 systemd-logind[1569]: Session 17 logged out. Waiting for processes to exit.
Jan 23 01:12:47.370846 systemd-logind[1569]: Removed session 17.
Jan 23 01:12:47.623402 kubelet[2899]: E0123 01:12:47.622656 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55994859c6-2x5qp" podUID="bc145d36-eea8-4680-ac11-0b79793cc035"
Jan 23 01:12:48.622614 kubelet[2899]: E0123 01:12:48.622477 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-769444c77-774wh" podUID="da5b2d2c-13cd-4988-8a1e-436e3c779260"
Jan 23 01:12:48.622614 kubelet[2899]: E0123 01:12:48.622523 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-769444c77-6h5s4" podUID="d38a34ac-d16c-44a2-b363-28d164fb855d"
Jan 23 01:12:50.623235 kubelet[2899]: E0123 01:12:50.623158 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2q95q" podUID="ac789593-88de-4afb-9cdb-f9323fe8cb8a"
Jan 23 01:12:52.451715 systemd[1]: Started sshd@15-10.230.15.178:22-20.161.92.111:34846.service - OpenSSH per-connection server daemon (20.161.92.111:34846). Jan 23 01:12:52.623889 kubelet[2899]: E0123 01:12:52.623549 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f4t5v" podUID="6ae36994-0284-456d-8619-5a1f2ff25c95" Jan 23 01:12:53.054361 sshd[5313]: Accepted publickey for core from 20.161.92.111 port 34846 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:12:53.057483 sshd-session[5313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:53.067132 systemd-logind[1569]: New session 18 of user core. Jan 23 01:12:53.076639 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 01:12:53.588027 sshd[5316]: Connection closed by 20.161.92.111 port 34846 Jan 23 01:12:53.587673 sshd-session[5313]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:53.595186 systemd[1]: sshd@15-10.230.15.178:22-20.161.92.111:34846.service: Deactivated successfully. Jan 23 01:12:53.596205 systemd-logind[1569]: Session 18 logged out. Waiting for processes to exit. Jan 23 01:12:53.600461 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 01:12:53.605873 systemd-logind[1569]: Removed session 18. 
Jan 23 01:12:56.621410 kubelet[2899]: E0123 01:12:56.621287 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57f7549777-v6lv7" podUID="236b218f-d8af-4e9e-b6b6-8f9ea312a2ce" Jan 23 01:12:56.622897 kubelet[2899]: E0123 01:12:56.622624 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b94489fd9-glnbg" podUID="ed71740a-4cd8-4c4d-959e-402af8a98785" Jan 23 01:12:58.695465 systemd[1]: Started sshd@16-10.230.15.178:22-20.161.92.111:34860.service - OpenSSH per-connection server daemon (20.161.92.111:34860). 
Jan 23 01:12:59.324556 sshd[5332]: Accepted publickey for core from 20.161.92.111 port 34860 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:12:59.325503 sshd-session[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:59.335654 systemd-logind[1569]: New session 19 of user core. Jan 23 01:12:59.344635 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 01:12:59.893426 sshd[5335]: Connection closed by 20.161.92.111 port 34860 Jan 23 01:12:59.895752 sshd-session[5332]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:59.903685 systemd[1]: sshd@16-10.230.15.178:22-20.161.92.111:34860.service: Deactivated successfully. Jan 23 01:12:59.911265 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 01:12:59.914888 systemd-logind[1569]: Session 19 logged out. Waiting for processes to exit. Jan 23 01:12:59.917509 systemd-logind[1569]: Removed session 19. Jan 23 01:13:00.002878 systemd[1]: Started sshd@17-10.230.15.178:22-20.161.92.111:34868.service - OpenSSH per-connection server daemon (20.161.92.111:34868). 
Jan 23 01:13:00.622458 kubelet[2899]: E0123 01:13:00.622151 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55994859c6-2x5qp" podUID="bc145d36-eea8-4680-ac11-0b79793cc035" Jan 23 01:13:00.641002 sshd[5346]: Accepted publickey for core from 20.161.92.111 port 34868 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:13:00.644895 sshd-session[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:00.656250 systemd-logind[1569]: New session 20 of user core. Jan 23 01:13:00.663745 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 01:13:01.505875 sshd[5349]: Connection closed by 20.161.92.111 port 34868 Jan 23 01:13:01.512007 sshd-session[5346]: pam_unix(sshd:session): session closed for user core Jan 23 01:13:01.525484 systemd[1]: sshd@17-10.230.15.178:22-20.161.92.111:34868.service: Deactivated successfully. Jan 23 01:13:01.529218 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 01:13:01.533872 systemd-logind[1569]: Session 20 logged out. Waiting for processes to exit. Jan 23 01:13:01.536807 systemd-logind[1569]: Removed session 20. Jan 23 01:13:01.612748 systemd[1]: Started sshd@18-10.230.15.178:22-20.161.92.111:34876.service - OpenSSH per-connection server daemon (20.161.92.111:34876). 
Jan 23 01:13:01.628106 kubelet[2899]: E0123 01:13:01.628019 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2q95q" podUID="ac789593-88de-4afb-9cdb-f9323fe8cb8a" Jan 23 01:13:02.239815 sshd[5359]: Accepted publickey for core from 20.161.92.111 port 34876 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:13:02.241862 sshd-session[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:02.251473 systemd-logind[1569]: New session 21 of user core. Jan 23 01:13:02.258770 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 23 01:13:02.623429 kubelet[2899]: E0123 01:13:02.621938 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-769444c77-774wh" podUID="da5b2d2c-13cd-4988-8a1e-436e3c779260" Jan 23 01:13:03.623441 kubelet[2899]: E0123 01:13:03.623157 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-769444c77-6h5s4" podUID="d38a34ac-d16c-44a2-b363-28d164fb855d" Jan 23 01:13:03.675847 sshd[5362]: Connection closed by 20.161.92.111 port 34876 Jan 23 01:13:03.679845 sshd-session[5359]: pam_unix(sshd:session): session closed for user core Jan 23 01:13:03.688251 systemd[1]: sshd@18-10.230.15.178:22-20.161.92.111:34876.service: Deactivated successfully. Jan 23 01:13:03.694353 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 01:13:03.697791 systemd-logind[1569]: Session 21 logged out. Waiting for processes to exit. Jan 23 01:13:03.701212 systemd-logind[1569]: Removed session 21. Jan 23 01:13:03.781690 systemd[1]: Started sshd@19-10.230.15.178:22-20.161.92.111:53744.service - OpenSSH per-connection server daemon (20.161.92.111:53744). 
Jan 23 01:13:04.403058 sshd[5377]: Accepted publickey for core from 20.161.92.111 port 53744 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:13:04.406111 sshd-session[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:04.417227 systemd-logind[1569]: New session 22 of user core. Jan 23 01:13:04.423914 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 01:13:05.199416 sshd[5380]: Connection closed by 20.161.92.111 port 53744 Jan 23 01:13:05.202004 sshd-session[5377]: pam_unix(sshd:session): session closed for user core Jan 23 01:13:05.207564 systemd[1]: sshd@19-10.230.15.178:22-20.161.92.111:53744.service: Deactivated successfully. Jan 23 01:13:05.211167 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 01:13:05.212961 systemd-logind[1569]: Session 22 logged out. Waiting for processes to exit. Jan 23 01:13:05.216212 systemd-logind[1569]: Removed session 22. Jan 23 01:13:05.299965 systemd[1]: Started sshd@20-10.230.15.178:22-20.161.92.111:53760.service - OpenSSH per-connection server daemon (20.161.92.111:53760). Jan 23 01:13:05.886406 sshd[5392]: Accepted publickey for core from 20.161.92.111 port 53760 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:13:05.888284 sshd-session[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:05.896682 systemd-logind[1569]: New session 23 of user core. Jan 23 01:13:05.903584 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 01:13:06.440725 sshd[5395]: Connection closed by 20.161.92.111 port 53760 Jan 23 01:13:06.442709 sshd-session[5392]: pam_unix(sshd:session): session closed for user core Jan 23 01:13:06.448970 systemd-logind[1569]: Session 23 logged out. Waiting for processes to exit. Jan 23 01:13:06.449933 systemd[1]: sshd@20-10.230.15.178:22-20.161.92.111:53760.service: Deactivated successfully. 
Jan 23 01:13:06.454617 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 01:13:06.458080 systemd-logind[1569]: Removed session 23. Jan 23 01:13:07.623593 kubelet[2899]: E0123 01:13:07.622221 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f4t5v" podUID="6ae36994-0284-456d-8619-5a1f2ff25c95" Jan 23 01:13:10.624153 kubelet[2899]: E0123 01:13:10.624065 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b94489fd9-glnbg" podUID="ed71740a-4cd8-4c4d-959e-402af8a98785" Jan 23 01:13:11.546757 systemd[1]: Started sshd@21-10.230.15.178:22-20.161.92.111:53766.service - OpenSSH per-connection server daemon (20.161.92.111:53766). 
Jan 23 01:13:11.623404 containerd[1589]: time="2026-01-23T01:13:11.622732365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:13:11.944784 containerd[1589]: time="2026-01-23T01:13:11.944306774Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:13:11.945823 containerd[1589]: time="2026-01-23T01:13:11.945717820Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:13:11.946045 containerd[1589]: time="2026-01-23T01:13:11.945809242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:13:11.946497 kubelet[2899]: E0123 01:13:11.946445 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:13:11.947547 kubelet[2899]: E0123 01:13:11.946900 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:13:11.947547 kubelet[2899]: E0123 01:13:11.947017 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57f7549777-v6lv7_calico-apiserver(236b218f-d8af-4e9e-b6b6-8f9ea312a2ce): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:13:11.947547 kubelet[2899]: E0123 01:13:11.947066 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57f7549777-v6lv7" podUID="236b218f-d8af-4e9e-b6b6-8f9ea312a2ce" Jan 23 01:13:12.134213 sshd[5412]: Accepted publickey for core from 20.161.92.111 port 53766 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:13:12.136762 sshd-session[5412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:12.145230 systemd-logind[1569]: New session 24 of user core. Jan 23 01:13:12.153657 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 23 01:13:12.621899 containerd[1589]: time="2026-01-23T01:13:12.621830897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:13:12.622968 kubelet[2899]: E0123 01:13:12.622883 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2q95q" podUID="ac789593-88de-4afb-9cdb-f9323fe8cb8a" Jan 23 01:13:12.685923 sshd[5415]: Connection closed by 20.161.92.111 port 53766 Jan 23 01:13:12.686801 sshd-session[5412]: pam_unix(sshd:session): session closed for user core Jan 23 01:13:12.698768 systemd[1]: sshd@21-10.230.15.178:22-20.161.92.111:53766.service: Deactivated successfully. Jan 23 01:13:12.707594 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 01:13:12.711611 systemd-logind[1569]: Session 24 logged out. Waiting for processes to exit. Jan 23 01:13:12.719852 systemd-logind[1569]: Removed session 24. 
Jan 23 01:13:12.948554 containerd[1589]: time="2026-01-23T01:13:12.948350944Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:13:12.952325 containerd[1589]: time="2026-01-23T01:13:12.952181468Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:13:12.952430 containerd[1589]: time="2026-01-23T01:13:12.952315090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:13:12.952695 kubelet[2899]: E0123 01:13:12.952608 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:13:12.953308 kubelet[2899]: E0123 01:13:12.952708 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:13:12.953308 kubelet[2899]: E0123 01:13:12.952830 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-55994859c6-2x5qp_calico-system(bc145d36-eea8-4680-ac11-0b79793cc035): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:13:12.953308 kubelet[2899]: E0123 01:13:12.952890 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55994859c6-2x5qp" podUID="bc145d36-eea8-4680-ac11-0b79793cc035" Jan 23 01:13:13.624217 kubelet[2899]: E0123 01:13:13.624140 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-769444c77-774wh" podUID="da5b2d2c-13cd-4988-8a1e-436e3c779260" Jan 23 01:13:17.791441 systemd[1]: Started sshd@22-10.230.15.178:22-20.161.92.111:58070.service - OpenSSH per-connection server daemon (20.161.92.111:58070). Jan 23 01:13:18.391395 sshd[5451]: Accepted publickey for core from 20.161.92.111 port 58070 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:13:18.392723 sshd-session[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:18.401185 systemd-logind[1569]: New session 25 of user core. 
Jan 23 01:13:18.409820 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 01:13:18.625993 containerd[1589]: time="2026-01-23T01:13:18.625928341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:13:18.937235 sshd[5454]: Connection closed by 20.161.92.111 port 58070 Jan 23 01:13:18.938778 sshd-session[5451]: pam_unix(sshd:session): session closed for user core Jan 23 01:13:18.944564 containerd[1589]: time="2026-01-23T01:13:18.943730453Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:13:18.945158 containerd[1589]: time="2026-01-23T01:13:18.945109934Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:13:18.945464 containerd[1589]: time="2026-01-23T01:13:18.945249421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:13:18.945806 kubelet[2899]: E0123 01:13:18.945741 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:13:18.948579 systemd[1]: sshd@22-10.230.15.178:22-20.161.92.111:58070.service: Deactivated successfully. 
Jan 23 01:13:18.949368 kubelet[2899]: E0123 01:13:18.947375 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:13:18.949919 kubelet[2899]: E0123 01:13:18.949696 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-769444c77-6h5s4_calico-apiserver(d38a34ac-d16c-44a2-b363-28d164fb855d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:13:18.949919 kubelet[2899]: E0123 01:13:18.949771 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-769444c77-6h5s4" podUID="d38a34ac-d16c-44a2-b363-28d164fb855d" Jan 23 01:13:18.957720 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 01:13:18.962723 systemd-logind[1569]: Session 25 logged out. Waiting for processes to exit. Jan 23 01:13:18.965015 systemd-logind[1569]: Removed session 25. 
Jan 23 01:13:19.624826 containerd[1589]: time="2026-01-23T01:13:19.624590377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:13:19.941162 containerd[1589]: time="2026-01-23T01:13:19.940720801Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:13:19.942909 containerd[1589]: time="2026-01-23T01:13:19.942171860Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:13:19.942909 containerd[1589]: time="2026-01-23T01:13:19.942327961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:13:19.943024 kubelet[2899]: E0123 01:13:19.942664 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:13:19.943024 kubelet[2899]: E0123 01:13:19.942746 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:13:19.943024 kubelet[2899]: E0123 01:13:19.942890 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-f4t5v_calico-system(6ae36994-0284-456d-8619-5a1f2ff25c95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:13:19.943024 kubelet[2899]: E0123 01:13:19.942945 2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f4t5v" podUID="6ae36994-0284-456d-8619-5a1f2ff25c95" Jan 23 01:13:24.044136 systemd[1]: Started sshd@23-10.230.15.178:22-20.161.92.111:59126.service - OpenSSH per-connection server daemon (20.161.92.111:59126). Jan 23 01:13:24.624830 containerd[1589]: time="2026-01-23T01:13:24.624753737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:13:24.636351 sshd[5478]: Accepted publickey for core from 20.161.92.111 port 59126 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:13:24.638022 sshd-session[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:24.651456 systemd-logind[1569]: New session 26 of user core. Jan 23 01:13:24.659596 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 23 01:13:24.952438 containerd[1589]: time="2026-01-23T01:13:24.951637213Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:13:24.959996 containerd[1589]: time="2026-01-23T01:13:24.959787194Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:13:24.959996 containerd[1589]: time="2026-01-23T01:13:24.959933551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:13:24.960360 kubelet[2899]: E0123 01:13:24.960273 2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:13:24.961375 kubelet[2899]: E0123 01:13:24.960375 2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:13:24.963551 kubelet[2899]: E0123 01:13:24.961587 2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-2q95q_calico-system(ac789593-88de-4afb-9cdb-f9323fe8cb8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:13:24.964993 
containerd[1589]: time="2026-01-23T01:13:24.964946567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 01:13:25.153625 sshd[5481]: Connection closed by 20.161.92.111 port 59126
Jan 23 01:13:25.154990 sshd-session[5478]: pam_unix(sshd:session): session closed for user core
Jan 23 01:13:25.164273 systemd[1]: sshd@23-10.230.15.178:22-20.161.92.111:59126.service: Deactivated successfully.
Jan 23 01:13:25.169779 systemd[1]: session-26.scope: Deactivated successfully.
Jan 23 01:13:25.171404 systemd-logind[1569]: Session 26 logged out. Waiting for processes to exit.
Jan 23 01:13:25.173758 systemd-logind[1569]: Removed session 26.
Jan 23 01:13:25.276483 containerd[1589]: time="2026-01-23T01:13:25.276425883Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:13:25.278548 containerd[1589]: time="2026-01-23T01:13:25.278252442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 01:13:25.278548 containerd[1589]: time="2026-01-23T01:13:25.278399485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 01:13:25.279764 kubelet[2899]: E0123 01:13:25.278685    2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:13:25.279764 kubelet[2899]: E0123 01:13:25.278768    2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:13:25.279764 kubelet[2899]: E0123 01:13:25.279123    2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-2q95q_calico-system(ac789593-88de-4afb-9cdb-f9323fe8cb8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:13:25.280154 kubelet[2899]: E0123 01:13:25.279267    2899 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2q95q" podUID="ac789593-88de-4afb-9cdb-f9323fe8cb8a"
Jan 23 01:13:25.624623 kubelet[2899]: E0123 01:13:25.623785    2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57f7549777-v6lv7" podUID="236b218f-d8af-4e9e-b6b6-8f9ea312a2ce"
Jan 23 01:13:25.637762 containerd[1589]: time="2026-01-23T01:13:25.637696905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 23 01:13:25.942988 containerd[1589]: time="2026-01-23T01:13:25.941669429Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:13:25.943933 containerd[1589]: time="2026-01-23T01:13:25.943839751Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 23 01:13:25.944124 containerd[1589]: time="2026-01-23T01:13:25.943848420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 23 01:13:25.945465 kubelet[2899]: E0123 01:13:25.944350    2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 01:13:25.945558 kubelet[2899]: E0123 01:13:25.945488    2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 01:13:25.945658 kubelet[2899]: E0123 01:13:25.945624    2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5b94489fd9-glnbg_calico-system(ed71740a-4cd8-4c4d-959e-402af8a98785): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:13:25.948972 containerd[1589]: time="2026-01-23T01:13:25.948866987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 23 01:13:26.263828 containerd[1589]: time="2026-01-23T01:13:26.262968188Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:13:26.264286 containerd[1589]: time="2026-01-23T01:13:26.264230684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 23 01:13:26.264377 containerd[1589]: time="2026-01-23T01:13:26.264353346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 23 01:13:26.265210 kubelet[2899]: E0123 01:13:26.264905    2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 01:13:26.265210 kubelet[2899]: E0123 01:13:26.265154    2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 01:13:26.266342 kubelet[2899]: E0123 01:13:26.265737    2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5b94489fd9-glnbg_calico-system(ed71740a-4cd8-4c4d-959e-402af8a98785): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:13:26.266342 kubelet[2899]: E0123 01:13:26.265833    2899 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b94489fd9-glnbg" podUID="ed71740a-4cd8-4c4d-959e-402af8a98785"
Jan 23 01:13:26.624972 kubelet[2899]: E0123 01:13:26.624793    2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55994859c6-2x5qp" podUID="bc145d36-eea8-4680-ac11-0b79793cc035"
Jan 23 01:13:26.625271 containerd[1589]: time="2026-01-23T01:13:26.624822223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 01:13:26.941857 containerd[1589]: time="2026-01-23T01:13:26.940928730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:13:26.945394 containerd[1589]: time="2026-01-23T01:13:26.945210420Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 01:13:26.945394 containerd[1589]: time="2026-01-23T01:13:26.945334384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 01:13:26.945874 kubelet[2899]: E0123 01:13:26.945782    2899 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:13:26.946130 kubelet[2899]: E0123 01:13:26.945896    2899 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:13:26.946130 kubelet[2899]: E0123 01:13:26.946049    2899 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-769444c77-774wh_calico-apiserver(da5b2d2c-13cd-4988-8a1e-436e3c779260): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:13:26.946130 kubelet[2899]: E0123 01:13:26.946105    2899 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-769444c77-774wh" podUID="da5b2d2c-13cd-4988-8a1e-436e3c779260"